Wednesday, January 23, 2019

#10yearchallenge

Lately I've seen many posts from the #10yearchallenge on my Facebook and Instagram feeds. And while my immediate response to posting side-by-side photos of myself was a fervent, "Ugh. No thanks," at my phone screen, it did get me thinking about where I was ten years ago and how I've changed over that time.

Ten years ago I was 5 months away from marrying my husband. I was practicing yoga every day and teaching yoga classes a couple nights a week. I had stopped eating animal products and had started learning how to cook vegan after a concerning cholesterol report. And I was in my 4th year of teaching, my first in 5th grade.

Since then I've gotten married. I've had a baby. Instead of getting up for a vigorous yoga practice each morning, I am leashing up a rambunctious Catahoula leopard dog named Kyber to run off some energy before we leave for the day. I'm teaching 4th grade. And I'm dipping my toes in the water of being a writer.

Of all the things on the list, being a teacher seems to be the one constant. I'm in the same district, at the same school, teaching in the same elementary age range. The teacher across the hall, who became my best friend that year as we stood in our doorways every morning sharing greetings, jokes, and exasperated sighs with our students and each other, is still my favorite person to work with. I still have the same desk chair. It's still terrible.

But my day-to-day teaching life has also undergone big changes.

If I look back on my teaching philosophies of ten years ago, the big picture is still pretty much the same. But a lot has changed in the execution.

Some practices I've abandoned in the last ten years:
  • Homework: I would assign it, check it, and grumble when it wasn't done. I spent time trying to differentiate assignments, only to find they still weren't completed. Games, bribes, and consequences. I tried them all. Ten years later, I stand firm against homework. I ask only that my students read nightly: no logs, no summaries, no time requirements, no parent signatures.
  • Assigned seats: It wasn't long into those first four years that I abandoned the matching nametags affixed to various desk configurations. I remember trying groups, semi-circles, and rows. Eventually I abandoned desks for tables. With grant money I now have standing desks, wheeled chairs with adjustable work tables, yoga balls, floor cushions, stools, bean bags, a rocking chair, small group tables, and a loft.
  • Behavior rewards/consequences: When it comes to classroom behavior, I am still trying to find my management/conformity balance, but I've mostly eliminated rewards and consequences. I try to employ logical results (maybe a change in seating to get back on track or popcorn and a movie to celebrate finishing a book group) and communicate with students through personal notes and talking one-on-one rather than the clips, cards, or elaborate point earning/spending systems I've tried in the past.
  • Standardized test strategies: Gah. Remembering this one hurts a little. Ten years ago we were in the thick of the CRCT, and test-taking strategies were all the rage. Every January we'd start a schoolwide test-training, uh, I think we called it a game... but just 'cause you put syrup on it doesn't make it pancakes, amiright?... Students would track FOCUS points for increasing blocks of time over several weeks, performing meaningless, ungraded practice tests intended to build up their ability to sustain focus for an extended period of time. When the class reached a certain number of focus points, there were gold coins affixed to doors and prizes galore. Of course, who needs to practice for months for a once-a-year test when we can take them 9 times a year instead? I'm pretty certain we're still on the wrong side of this testing thing, but ten years ago, I thought I was doing what I should be doing to help my students succeed.
Even as I look back and cringe at these practices, I know I was doing them with the best intent. A combination of doing what I was asked/expected to do, what worked for the people who knew better than me, and what I thought was just how it's done.

Ten years later, I know that nothing is "just how it's done" no matter how ingrained a practice seems to be. There are always different ways to do things, but it's almost never easy to do the different thing. With ten years comes the confidence (or blind gumption) to take risks, to say, "I'm going to try this instead." There are mountains of books atop endless fields of research on teaching practice because we are constantly trying to figure out ways to better serve the kids we teach and each other. 

It's taken me ten years to come to, and act on, conclusions about these four practices. I can only imagine what 2029's me will think of today's me...

Sunday, January 20, 2019

Skate Better

A couple of weeks ago I posted an entry about how assessment tools are best used by reflecting on the intentions behind them. While working on it, I accidentally went down the achievement rabbit hole. My entry kept getting longer and longer, going in multiple directions. I kept finding more and more research to read. I decided to cut myself off, post what I had, and come back to the topic later.

So I am picking up where I left off: testing results. To sum up: Don't share them with kids. Don't put them on report cards. Don't set goals based on levels or scores. The creators of the resources I examined, Fountas and Pinnell and NWEA MAP Growth, do not intend this use. Simple enough.

But what if it works?

Hang on, let me make a small edit: But what if it "works"?

I have been asked, then told (and, in the end, reprimanded), to set goals with students. To set score-based goals with students. Again and again, I have refused, citing my opposition to this practice, and yet the comeback I most often get is, "Research shows that students improve when they set goals."

The Disney movie Brink! was released in 1998. I totally remember it. (Let's assume it's because my younger sister or the kids I babysat watched it, and not because of a crush on Erik von Detten.) But this scene sums up exactly what I think about this goal-setting process.



It's as if all we needed to do was say, "Read better." Or "Write better." Or "Know more math skills." And the rest would fall into place.

No one actually believes that. So why does it seem that classes and students who practice setting goals for scores get results this way?

Alfie Kohn has something to say about the effects of grades on motivation that seems applicable here. In his post, The Case Against Grades, he cites research concluding that the motivation to get better grades (scores, essentially) is different from the motivation to learn. The research shows that placing the heaviest weight on the score derived from an assessment likely diminishes the learning itself, even if the student's grade (score) improves. By emphasizing the importance of a score, we redirect students' focus from what they are learning to how well they are doing. He says,
"Even a well-meaning teacher may produce a roomful of children who are so busy monitoring their own reading skills that they're no longer excited about the stories they're reading."
Then there's the issue of the skills that can be most readily quantified. Sure, we can get a number that supposedly measures a student's reading, but what about their reading are we actually measuring? Are we really comfortable measuring only what can be answered by choosing one of four possibilities? Of course not! And yet, when the emphasis is on scores, doesn't one run the risk of focusing on the skills that can be measured that way? In another zinger, Alfie Kohn says,
"Moreover, as Cuoco and Ruopp remind us, rising scores over time are often nothing to cheer about because the kind of instruction intended to prepare kids for the test - even when it does so successfully - may be instruction that's not particularly valuable. Indeed, teaching designed to raise test scores typically reduces the time available for real learning."
Of course, the test producers themselves won't agree with that entirely. In fairness, assessment tools created in the 20 years since this research was published (the post itself is from 2011) have made strides toward higher depth-of-knowledge questions, but they remain questions that can be answered by clicking a button.

In a post on their blog, the Vice President of Education Research at NWEA says,
"The goal of our work and our assessment is to help educators help students improve their learning, not their scores. In other words, improved scores should follow improvements in learning, and not be an end in themselves."
I still have so much thinking to do here. So many questions about how to navigate these waters in my classroom and school. But even as I keep digging through the research and testing out new strategies, I can fall back on research that tells me I'm on the right track.

Now, just for fun, I'll leave you with this gem:








Saturday, January 12, 2019

Something Good

I've only just started this blogging thing and, even though I've only posted 3 times, I've lamented a lot. Granted, my musings are a reflection of what's going on with me at the present moment. That's kind of how this online journaling works, right? And my current state is so deeply entrenched in frustration due to testing and its side effects, scripted curriculums, data-driven busy work, and the like, that that's what's coming out when I sit down to write.

This week was no different. Worse, really. (I attempted to bring some of my points to the negotiating table. It didn't go well.) But something good happened this week, too, and this morning I want to talk about that.

My teammate and friend presented a short PD during our weekly grade level planning meeting this week. He attended a session with Beers and Probst on their book Reading Nonfiction: Notice and Note Stances, Signposts, and Strategies and shared with us three stances used to enrich students' reading of nonfiction texts. In the 20-minute talk, he explained the theory, walked us through applying it to a shared reading passage he provided, discussed the results he's already seeing from implementing it in his classroom, and ended with some tangible steps to take back to our own classrooms.

It was the most useful PD we've had all year.

It was also the first PD we've had that did not relate directly to implementing and analyzing assessments or interpreting a scripted program.

It was the first PD that has been what a PD should be.

Now, I've been feeling increasingly grumpy and disgruntled during meetings all year. And while I walked into this one with a smile (I didn't want to make my teammate feel bad with my resting bitch face), inwardly I was feeling just as grumpy and disgruntled as I have been. But when he finished talking, the strangest thing happened: I felt... good. I felt a little energized. I felt like I had just learned something interesting and applicable that could positively impact my teaching. I had a new strategy to try and a new path to explore in my own professional development journey.

The more I read, write, and talk about what is going on in my district and school, the more I worry I'm becoming a grumpy curmudgeon who's unsatisfied with everything. But I'm not. I love learning new things. I love picking up a new professional text and trying out new ideas. I love hearing from my friends and colleagues when they are all fired up about something new they've learned. And I enjoy authentic professional development.

I am excited to try out what I've learned and grateful for something good on a Thursday afternoon.

Wednesday, January 9, 2019

Scoreboard


Many teachers I know will lament the volume of assessment students are being asked to endure. Many, like me, take particular discomfort in the role we play as agents of these assessments. Administer the test (pace the room, looking over shoulders and saying impotently, "I can't help you with that," when a student asks for clarification, all while replaying an episode of Friends in your head), evaluate the results (Chart it! Graph it! Bop it!), and use the pages and pages of individualized data points to inform your teaching. Rinse and repeat.

Even as I rebel against this process, I play a role in it. Employing the classic two-year-old's "go boneless" method won't help me here. I mean, I've considered it. Instead, I decided to investigate the intentions of the assessments themselves and consider how I can balance staying true to that intention and to my own philosophy.

One assessment I am asked to implement is the Fountas and Pinnell Benchmark Assessment System. I can trace my F&P experience back to college. Through my entire teacher training and career, these names have stood as beacons on the path of teaching children to read. And, yet, I was constantly frustrated by the way I was being expected to use the F&P levels. You can imagine my relief when, about a year ago, an article titled "Fountas and Pinnell Say Librarians Should Guide Readers by Interest, Not Level" appeared in my Twitter feed. In this interview with School Library Journal, the very people who created the leveling system solidify the argument against leveling libraries and asking students to select books based on their reading level (Lexile, AR dot, etc.). Ah ha! (Enthusiastic fist pump.)

Fountas and Pinnell clarify that their intentions were to create a system to determine a child's reading level that would then be placed "in the hands of people who understand their complexity and use them to make good decisions in instruction". They reject the practice of informing students and parents, leveling the classroom and school libraries, and using levels to determine report card grades.

What, then, do F&P (as we affectionately call them) want us to do with the letter score each child receives from the hours spent administering the BAS? (Because it had better be worth it, man. This thing is a beast.)

They want this to be a teacher's tool.

They want us to learn what the levels mean. To dive into the complex analysis of the text levels, what they offer to readers and what they ask of readers.

They want us to learn to use the language associated with a text level to facilitate growth in a child's reading.

They want us to make our teaching of reading more precise through individual and small group instruction.

They want us to have some help finding appropriate materials to offer a challenge to learners that will help them grow but not frustrate, overwhelm, or bore them.

This is not an achievement tool. "Achievement" is a loaded word, certainly so in the education world, but in its simplest form it means to have accomplished something. A student's derived reading level is not an achievement, and it certainly is not more of one when they score a P rather than a J. The reading level information we derive from the F&P BAS helps teachers teach.

Therein lies the role that goal-setting can play in this process: goals for teachers to be more intentional and informed in our reading instruction. Clearly it's safe to say Fountas and Pinnell would reject the practice of setting reading level goals with students, charting their growth, and rewarding a reading score achievement. And yet this practice is rampant in response to standardized test scores.

The other heavyweight in my experience is the NWEA MAP Growth Test. Our students sit for this assessment three times a year, taking three tests (Reading, Math, and English Language Arts) for blocks of time ranging from 45 minutes to 1 1/2 hours.

As with the F&P BAS, I want to set aside my baggage about this test and first examine the intentions. This one's a little tougher for me, it's true.  As a norm-referenced, online, multiple-choice test, and a soldier in the army of standardization, the MAP immediately gives me the uh-oh feeling.

Even after researching the intentions of NWEA's MAP Growth Test, the language symptomatic of a "compulsion to compare" is evident. The norm-referenced test allows us to look at each student, and then the class, school, and county, and compare a score against the average. Are you better than, worse than, or just about the same as everybody else?

What I did find comfort in was the intention of learning versus achievement. Like the F&P BAS, the MAP Growth is not meant to be treated as an achievement scale but as a growth measurement. Many places, my home school for one, have their sights set on achievement. Even growth can be translated into achievement when we are required to set growth goals and determine whether or not students achieve them.

Again, this information is not meant to be placed in the hands of students, or their parents, as a means of judging their school performance. This is a teacher's tool to support instruction.

Using these tools as a measure of achievement, associating them with student goals, and labeling students with the letters and numbers resulting from these assessments is not in line with the original, and heavily researched, intentions of the tools. Instead we can step away from the number or letter generated by the test and zero in on what it means for us as teachers working with the unique human being in front of us, one who is capable of learning new things. At their simplest, these tests say, "Here is what some of those things are."

Wednesday, January 2, 2019

New Year, Same You

Goals.

That's what New Year's resolutions are all about, right? Setting a goal weight. Keeping a cleaner house. Packing healthier lunches. Exercising every day.

I looked up New Year's resolutions on Pinterest and found pins like "50 Resolutions You Can Actually Keep" and "The Ultimate Resolutions List" and "70 Resolutions to Set in 2019". Gah.

It's been about 5 years since I resolved to stop setting New Year's resolutions. I'm all for goal setting. I just don't jibe with throwing out 85 of them on one earmarked day and theoretically maintaining them for the next 364 days.

Right now I am battling for a better understanding of goal setting in our school. Currently the word "goal" is synonymous with "score," which is stirred in with "growth" and has an "achievement" chaser. This vicious cocktail leaves behind a nastier hangover than any NYE party mixer could.

After students complete the midyear county-wide, computer-based, smart-program growth assessment, their scores are doled out, sometimes charted or graphed, and compared against the "goal score" set following the beginning-of-year assessment. Scores, and thus the students themselves, will meet, exceed, or fall short of the set goal. Celebrations, condolences, or reprimands are delivered, and the process starts over again with a new goal score for the end of the year.

The "goal" is a numerical score determined by a formula for (universally) expected growth that will determine a student's perceived achievement.

I've been battling this practice in my school for a few years. Ultimately, here's what it comes down to for me: students (children, learners) should not bear the burden of a number standing in for their abilities as a reader, a writer, or in any other subject. If we must have scores (and must we, really?), that score is but one factor contributing to my, the teacher's, knowledge of what a student understands or needs to learn. I take it, weigh what it tells me against what I know of this student as a learner, and proceed to use it as I see fit to inform the what, how, when, and why of that student's learning.

Over the next few posts, I will go into more detail about this process. I'll share my resources and methods for using test results to inform my teaching, because I believe it can. This can be done without pressuring, bribing, threatening, or indeed discussing numerical scores with students at all. Results can guide decisions I'll make for whole class, small group, and individual instruction. I will share what I've tried, what I've avoided, and what I've learned from both.

I also want to come back to this idea of goal setting with students. If this score-based goal setting is irresponsible and detrimental to students' development as learners, as I think it is, how then may goal setting be done to genuinely support students' growth? Specifically, how can metacognition and growth mindset strategies help students better understand themselves as learners, thereby setting goals that support the process, not the result?

Based on conversations I have had with fellow teachers from other schools and districts, this specific practice is not universal, but the addiction to scores does seem to be. What has been your experience? How does your school or district respond to scores? I can somewhat safely assume that the testing portion is widespread; most places have adopted a test to predict the result of the state-mandated, high-stakes test. What has been your experience with goal-setting for students? How has it been modeled, encouraged, or required in your school or district?

Most importantly, what goal-oriented work have you done in your classroom? How have you seen this affect your students and their learning? How have you felt about it afterwards? In what ways did it challenge or support your ideals as a teacher? This issue carries a lot of weight for me, as it has been the hill I've repeatedly vowed to die on when it comes to my classroom (granted I've got many hills).

As I look into the face of a new year, dense with the same issues, I resolve to stop shouting into the wind about how poisonous I think this practice is. I resolve instead to research, employ, and share what we can be doing to responsibly, respectfully, and effectively set learning goals for and with students.




