Testing, Testing, 1…2…3…
Recently, my high school participated in the Colorado Student Assessment Program. I thought that, perhaps, it might be enlightening for those outside the field of education to see what that process actually looks like in a school.
Here’s the schedule for the week, actually copied from the email sent to the entire staff:
CSAP WEEK IS HERE! WOOHOO!!!!
MONDAY – CSAP Test book prep and training
Training in Theater
12:00 – 1:30 Freshman Proctors
1:30 – 3:00 Sophomore Proctors
Book Prep in D102
12:00 – 1:30 Sophomore Proctors
1:30 – 3:00 Freshman Proctors
Helpful hints –
· Please be ON TIME for the training and book prep
· Please do not arrive EARLY to prepare your CSAP books.
· Please print off your check-out sheet (see attached) prior to attending the afternoon session.
9th & 10th Grade Testing Order: Writing 1, Writing 2, Reading 3
9th & 10th Grade Testing Order: Reading 4, Writing 5, Reading/Writing 6
9th & 10th Grade Testing Order: Mathematics 1, 2, 3
10th Grade Testing Order: Science 1, 2, 3
9th Grade Proctors – Report to D102 for test book clean-up/back of book bubbling party
There are NO freshman on Friday
A couple of points of clarification on the contents of the email:
1. We, as teachers, have to sign off on “CSAP refresher training” and sign a confidentiality notice every year, hence the training in the theatre.
2. “Book Prep” involves us writing the names and student ID numbers on all test books, drafting booklets, and other state-controlled items.
3. “Test book clean-up/back of book bubbling party” requires the proctors of freshmen CSAP groups to go in and clean “stray marks” from CSAP books so that no book gets rejected because of a kid accidentally marking outside of the prescribed area, and to bubble in the student information on the back of the CSAP test books themselves. They don’t let the students do it.
But wait! There’s more…
So, schedules in hand, after our morning session of professional development and pep talk from the superintendent (a subject that might become a diary all its own), the freshman proctors head en masse to the theatre to receive direct instruction on how to proctor the test. Unbelievably, it is different this year. Normally, once our group of sullen kiddos is in the room, we follow the script in the proctoring manual, which says that after students complete the test, they may read a book. Not so this year. Nope. We have permission from the district to omit “…you may read, but you may not do writing” from the directions. The reasoning, at least according to the person responsible for the test: “if they get bored while sitting there, they’re likely to open the test and go back over their answers.” Now, I can see some logic in that, but in reality, we’re supposed to read the script exactly as it appears, or we risk invalidating the test.
Well, no big deal, right? Then we were on to Day One of testing, which, judging by the bevy of questions posed to the facilitator during her typographical-error-addled PowerPoint presentation (one slip too good not to mention: instead of “Keep your door ajar,” she typed and presented a slide that said, “Keep your door agar”), was bound to be a circus of magnificent proportions.
Day One went fairly smoothly in terms of operations, surprising though that was. I confiscated several phones, kept students from accessing their backpacks, and gave students their raffle tickets for showing up. I also managed to read many things I’d been wanting to read for some time. The first was Valerie Strauss’ recent blog post “The ed report that all ‘reformers’ should read.” I read a hard copy, because during test administration we cannot use laptops or any electronics, and I kept commenting, starring, and underlining certain things as my irritation grew. The gist of the report she referenced was, in her words,
But the report says that there are so many problems with the test scores that an overall rise in the numbers cannot be pointed to as proof of student learning. — my emphasis
Well, duh. Not “duh” to Valerie Strauss, or “duh” to the report-writers, but “duh” to the people who keep trying to use the results of standardized tests to determine student achievement, and now, teacher effectiveness. Not only that, but
“Because DC is a highly mobile district and the student population changes every year, score fluctuations may be the result of changes in the characteristics of the students taking the test, rather than improvements or declines in students’ knowledge and skills.”
So, as many of us have pointed out all along, the test results give us comparisons of apples and oranges, or they show us the achievement data of different students. I’m all for being held accountable for student learning, but how is that really possible if there isn’t appropriate data to show improvement or loss of learning in a specific student group for which I am responsible?
I finished that article; then, as part of my proctoring duties, I took a stroll about the room to ensure that students weren’t doing anything out of the ordinary to their test booklets (though I wished ever so much to see one of them writing “I prefer not to take your test” in some of their blanks). I saw a couple of the writing prompts, which I unfortunately cannot reproduce here, or I would likely lose my job despite the use of a pseudonym.
As I’ve mentioned in previous diaries, my school has a 75% free and reduced lunch population. In my room, I had a small group of 11 students classified as between NEP and LEP, meaning that their skills in English range from Non-English Proficient to Limited-English Proficient. Because of this, they received two accommodations during the test: the use of word-to-word Spanish-to-English dictionaries, and extended time to process the information. I would also like to clarify that I do not mean for this discussion to turn toward the “They need to learn the language” debate. I am providing background information about the students in my room to underscore the fact that, despite their lack of mastery of the English language, their tests will be graded the same way as every other 9th grade student’s test. Every other. Including the kids who go to schools with a mere 5% free and reduced lunch population. Including kids who have complete mastery of the English language. Including students with other accommodations for special education services, for processing disorders, etc.
All of this is to say that, given the writing prompts, and given story titles like “Hamish Mactavish is Eating a Bus,” the students for whom I proctored the exam have little to no chance of scoring anywhere near what other students without a linguistic disadvantage manage to score.
That brings me to my next reading, the paper “Are ‘Failing’ Schools Really Failing?”
The abstract of the paper states,
“Evaluating schools on achievement mixes the effect of school factors (e.g. good teachers) with the effect of non-school factors (e.g. homes and neighborhoods) in unknown ways. As a result, achievement-based evaluation likely underestimates the effectiveness of schools serving disadvantaged populations.”
The body of the paper goes on to say,
“Yet we will show that if we evaluate schools using learning…– i.e. if we try to isolate the effect of school from non-school factors on students’ learning – our ideas about “failing” schools change in important ways…These patterns suggest that raw achievement levels cannot be considered an accurate measure of school effectiveness…As long as school quality is evaluated using measures based on achievement, accountability-based school reform will have limited utility for helping schools to improve.”
The paper goes on to discuss different advantages inherent in professional families, including the number of words per hour spoken to a child in such homes (2,153) vs. the number spoken in a welfare household (616), but it all boils down to the idea that yes, Virginia, there is a difference between the kids who attend my school and the kids who attend some of the more affluent districts in my state.
All of this is to say that, while reading these pieces and others as my own little act of rebellion while proctoring the test, I learned what I already knew. Like Winston Smith reading the Brotherhood’s book, “the information had not told [me] something I didn’t already know; it had merely systematized the knowledge that [I] possessed already.”
The problem here, though, is that if there is so much (educationland buzzword alert!!) data out there to support what I see in front of me every day, why aren’t more people listening? Why are federal and local governments still imposing this form of standardized achievement measurement on a system that obviously defies any sort of standard definition?
Well, that brings me to Alfie Kohn’s “The 500-pound Gorilla.” Basically, all of the standardized testing industry, and all of the attempts to turn children into production-focused widgets rather than critical thinkers, lead back to McGraw-Hill. So, really, we just have to follow the money to the private industry that will one day eye my students as potential employees. The push to make students more effective workers comes from the fact that “people in the public sector are uncritically adopting the world view of the private sector – and applying it to schools.” So, what has happened, and what I referenced in a previous diary about testing kids to their educational death, is that a “business ethos” has taken over education, “with an emphasis on quantifiable results, on standardized procedures to improve performance, on order and discipline and obedience to authority.”
This emphasis on order and discipline, as I referenced in my last diary, results in an almost completely unrecognizable education system. Or, as Alfie Kohn said,
“None of this is particularly effective at preparing children to be critical thinkers, lifelong intellectual explorers, active participants in a democratic society.”
Big Brother would be so proud.