Wednesday, January 9, 2019

Scoreboard


Many teachers I know will lament the volume of assessment students are being asked to endure. Many, like me, take particular discomfort in the role we play as an agent of these assessments. Administer the test (pace the room, looking over shoulders, and saying impotently, "I can't help you with that," when asked for clarification from a student, while replaying an episode of Friends in your head), evaluate the results (Chart it! Graph it! Bop it!), and use the pages and pages of individualized data points to inform your teaching. Rinse and repeat.

Even as I rebel against this process, I play a role in it. Employing the classic two-year-old's "go boneless" method won't help me here. I mean, I've considered it. Instead, I decided to investigate the intentions of the assessments themselves and consider how I can balance staying true to that intention and to my own philosophy.

One assessment I am asked to implement is the Fountas and Pinnell Benchmark Assessment System. I can trace my F&P experience back to college. Through my entire teacher training and career these names have stood as beacons on the path of teaching children to read. And yet, I was constantly frustrated by the way I was being expected to use the F&P levels. You can imagine my relief when, about a year ago, my Twitter feed held an article titled "Fountas and Pinnell Say Librarians Should Guide Readers by Interest, Not Level". This interview with School Library Journal solidifies the argument against leveling libraries and asking students to select books based on their reading level (Lexile, AR dot, etc.), and it comes from the very people who created a leveling system. Ah ha! (Enthusiastic fist pump.)

Fountas and Pinnell clarify that their intentions were to create a system to determine a child's reading level that would then be placed "in the hands of people who understand their complexity and use them to make good decisions in instruction". They reject the practice of informing students and parents, leveling the classroom and school libraries, and using levels to determine report card grades.

What, then, do F&P (as we affectionately call them) want us to do with the letter score each child receives from the hours spent administering the BAS? (Because it had better be worth it, man. This thing is a beast.)

They want this to be a teacher's tool.

They want us to learn what the levels mean. To dive into the complex analysis of the text levels, what they offer to readers and what they ask of readers.

They want us to learn to use the language associated with a text level to facilitate growth in a child's reading.

They want us to make our teaching of reading more precise through individual and small group instruction.

They want us to have some help finding appropriate materials to offer a challenge to learners that will help them grow but not frustrate, overwhelm, or bore them.

This is not an achievement tool. "Achievement" is a loaded word, certainly so in the education world, but in its simplest form, it means to have accomplished something. A student's derived reading level is not an achievement, and certainly is not more of one when they score a P rather than a J. The reading level information we are able to derive from the F&P BAS helps teachers teach.

Therein lies the role that goal-setting can play in this process. Goals for teachers to be more intentional and informed in our reading instruction. Clearly it's safe to say Fountas and Pinnell would reject the practice of setting reading level goals with students, charting their growth, and rewarding a reading score achievement. And yet, this practice is rampant in response to standardized test scores.

The other heavyweight in my experience is the NWEA MAP Growth Test. Our students sit for this assessment three times a year, taking three tests (Reading, Math, and English Language Arts) for blocks of time ranging from 45 minutes to 1 1/2 hours.

As with the F&P BAS, I want to set aside my baggage about this test and first examine the intentions. This one's a little tougher for me, it's true.  As a norm-referenced, online, multiple-choice test, and a soldier in the army of standardization, the MAP immediately gives me the uh-oh feeling.

Even after researching the intentions of NWEA's MAP Growth Test, the language symptomatic of a "compulsion to compare" is evident. The norm-referenced test allows us to look at each student, and then the class, school, and county, and compare a score against the average. Are you better than, worse than, or just about the same as everybody else?

What I did find comfort in was the intention of learning versus achievement. Like the F&P BAS, the MAP Growth is not meant to be treated as an achievement scale, but as a growth measurement. Many places, my home school for one, have their sights set on the achievement. Even growth can be translated to achievement, when we are required to set growth goals and determine whether or not students achieve those goals.

Again, this information is not meant to be placed in the hands of students, or their parents, as a means of judging their school performance. This is a teacher's tool to support instruction.

Using these tools as a measure of achievement, associating them with student goals, and labeling students with the letters and numbers resulting from these assessments is not in line with the original, and heavily researched, intentions of the tools. Instead we can step away from the number or letter generated by the test and zero in on what it means for us as teachers working with the unique human being in front of us who is capable of learning new things. In its simplest form, each of these tests says, "Here is what some of those things are."
