S1 E18 - Reasons the Wheels Might Fall off Your Whole School Approach - Part 2






Transcript

00.00
Introduction
Hi there, it's Jocelyn here. Welcome to the Structured Literacy podcast coming to you from the beautiful home of the Palawa People, Tasmania.

In our last episode, I shared the first reason that the wheels might be falling off your literacy improvement efforts. If you haven't listened to that episode, which was all about the need for strong, decisive leadership, I'd suggest you go back and have a listen when you have a moment.

00.27
Are your data ducks in a row?
We all put so much effort into literacy improvement, and it's disheartening when that effort doesn't result in measurable growth for our students. This series of four episodes aims to help you reflect on what is happening in your school and to isolate and improve the areas that are holding your students' results back.

In this episode, I'd like to discuss the importance of getting your assessment and data ducks in a row. One of the common situations that many schools find themselves in is having too many, not enough, conflicting, or ineffective assessments in place.

01.09
What should we be assessing?
I'll start this discussion with a few points about what we should be assessing in the early years and upper primary. In the early years of primary school, and until students have acquired knowledge of the full alphabetic code, it is vital that we monitor student knowledge of phoneme-grapheme correspondences. While not the whole picture of a student's reading progress, this knowledge, together with the student's ability to blend and segment, is at the heart of the foundational skills of reading.

Yes, of course, oral language and general vocabulary underpin all literacy. But those are unconstrained, biologically primary skills, and as such, we don't have simple, reliable ways to monitor our students' progress in them. Speech therapists have tools to identify developmental milestones in oral language, but that's not generally our role as classroom teachers.

The biologically secondary nature of acquiring phonics means that careful monitoring is important. We can't just teach phonics and hope for the best. Unless we can definitively answer the questions 'How many graphemes does this student know?', 'Can they blend and segment with these graphemes?' and 'What is this student's rate of acquiring phoneme-grapheme correspondences?', we can't properly evaluate the effectiveness of instruction.

02.30
Why you need good data.
Many teachers and schools will tell me that they collect this kind of data, but the schools that are most effective in improving whole-school results take a collective approach to this work. They use the same tools, assess at the same time, have a common language and understanding of student growth, and actively use their data to evaluate practice.

They don't just collect data. They use it, in conjunction with high expectations of student growth, to ensure student outcomes, not just hope for them. All of these components are necessary for great results, but they still aren't the total picture of great phonics assessment.

Schools that approach this effectively are also clear about which assessment tool or tools align most directly with their instruction. Using a phonics monitoring tool that does not align with your chosen program's scope and sequence creates unnecessary complexity and reduces teachers' capacity to evaluate practice and use their data to inform the next steps in teaching. After all, that's the whole point of doing the assessment in the first place.

So my recommendation in this area is to make sure that your phonics monitoring tool matches your chosen program. If your school is using a pre-prepared or commercial program, it should come with an assessment of phoneme-grapheme correspondences and word-level reading that you can use.

04.00
Did you ditch levelling or benchmark assessments?
The next point I'd like to make about assessment is that while many teachers and schools have stopped doing benchmark reading assessments, they haven't truly let go of levelling.

Increasingly, I'm seeing schools use a phonics program to teach the nuts and bolts of decoding, but use the assessment that aligns with their decodable texts to find out which books a student should be reading. In some cases, they then use this data from the decodable text assessment to group students for instruction in the same way they used to with a benchmark reading assessment. This creates a whole mishmash of messiness (and I challenge you to say that ten times really fast), because the sequence of graphemes followed in a decodable text series may not match the sequence of graphemes in the phonics approach.

When this happens, we are not able to be truly targeted in instruction, which is a pivotal condition for quick, effective learning of phonics. Some children will be okay with this mismatch. But for many, it places unnecessary pressure on their cognitive load, preventing them from experiencing the number of repetitions of new material they need to permanently embed it in long-term memory.

So if you need 45 repetitions with a phoneme-grapheme correspondence and you only get 30, no matter how hard you try, you just won't develop automaticity in recognizing and recalling the grapheme. If your teacher then uses an assessment from a decodable text series to decide on instruction, and the books you are reading don't contain the graphemes you know, that makes learning to lift words from the page pretty tricky. The problem here isn't the decodable texts themselves; it's the misalignment between the tools being used.

Alignment is key.
If you have several series of decodables in your school, and I think it's a good idea that you do, because each one will have its strengths and you'll have a broad range of texts to draw on, you will need to align them with your phonics approach's teaching sequence. Don't try to merge the sequences and adjust things to dovetail everything. It simply doesn't work. Choose one sequence and align everything to that.

If you are one of the teachers and schools using Reading Success in Action, you'll have this information at your fingertips, as I have already aligned several series to the teaching sequence, and we have plans for more to come. Schools that make this simple for teachers go on to organize the texts in their school into the sets that align with Reading Success in Action, making further assessments to "place students with texts" completely unnecessary. When the box says Decoding 2, Set 4, you know that if the children are up to that point, these are all texts they can read.

If you are using another tool or program to guide your phonics and decoding teaching, simply look inside the decodables you have and work out where they sit in your program's scope and sequence, according to both graphemes and irregular high-frequency words. Now, this becomes less critical as children get closer to developing full knowledge of the alphabetic code, when they can engage in what's known as self-teaching.

But in the early stages of learning to read, and especially if you are a vulnerable learner, it's really important that there is strong alignment between the phonics you are learning and the books you are reading. Doubling up on assessment tools just makes it harder to achieve that alignment.

07.45
Over-reliance on oral reading assessment
The next thing we need to discuss is the way we tend to rely far too much on oral reading assessment to answer all of our questions about a student's progress. An oral reading assessment can tell us how quickly and accurately a student reads, how much expression they read with, and whether they have grasped the basic ideas of the text. It cannot tell us how deeply the student can think about the text, whether they have background knowledge of the content, whether they can infer in a general sense, or where they are up to in learning about the alphabetic principle.

This thing about inference and these deep questions is something that we really need to challenge. Inference is heavily dependent on vocabulary and background knowledge. If a student cannot answer the inferential questions after reading, we actually don't know whether they can't infer in general, or whether they simply don't have the necessary knowledge of the ideas and language in the text to do so. At best, we are guessing.

An oral reading assessment also won't help you tick students off for the other content descriptors we find in the curriculum, such as making text-to-text connections, thinking about the author's intent in word choice, or considering the interplay between images and text. These things are addressed in other ways, such as through a text-based unit.

We need to make a shift in our thinking.
I think we have the relative importance of the two forms of assessment the wrong way around. For so long, we have placed most of the weight in assessing reading on the benchmark reading assessment, with the other work from our direct lessons and units taking a back seat when it comes to reading comprehension and understanding text.

What we're trying to do now, in many cases, is simply have our DIBELS or our Acadience take the place of the benchmark assessment that was never fit for purpose in the first place. Instead, I think we need to recognize that an oral reading assessment is one part of the puzzle, but it's not the major one, especially as students progress through the years of school.

Of course, when students are first learning to read, a large part of our focus goes on those important steps of moving from decoding, where we're sounding out sound by sound, to word recognition, and into fluent reading. However, once the basics of decoding are done and dusted, it's time to look more widely.

10.12
Let me clarify.
I'm not for a minute suggesting that students shouldn't complete our DIBELS or our Acadience, or the Neale Analysis of Reading Ability in the upper primary years. I'm simply saying that when we are looking at a student's overall reading development, there is a lot more than that to consider.

There actually isn't an off-the-shelf tool that is going to give us the information we're looking for. We need to learn to tie classroom-level assessment into our teaching, and to plan for instruction in a way that creates alignment between the achievement standard and content descriptors (or the outcomes and indicators, if that's where we live) and what students are able to do.

10.55
Are you over-assessing?
If relying too much on oral reading assessment keeps our focus too narrow, then this next situation is an example of going too broad in a bid to better understand our students' reading development and to be data-driven. Many schools over-assess their students with tools that really don't add much to teacher understanding. It's almost like people are trying to assess their way out of confusion. If I don't feel like I have a handle on how to monitor growth without a benchmark assessment from a levelled reading scheme, I'll just do another assessment in the hope that a light bulb will go off for me, and all will become clear. That's a situation many of us find ourselves in, and if that's where you've been, please don't beat yourself up. It's wonderful that you are wanting to learn more about your students and how to help them.

11.43
How you can question your assessments.
Before electing to take on another assessment, though, and before embarking on another round of assessment with your current tools, I really encourage teachers and schools to ask and answer the following questions:

11.56
No. 1 - What questions do we need our assessments to answer?
What questions are we trying to answer with the assessment we are conducting? When you can answer that, the rest will become so much easier, because the purpose of assessment varies. The purpose of our DIBELS or Acadience assessment is to identify which children may be at risk of reading failure. The purpose of our phonics monitoring assessment is to let us know which phoneme-grapheme correspondences the child can recognize and how well they can read words containing them, regardless of age. Know what questions you're trying to answer. That's the first thing.

12.30
No. 2 - Do any of our other assessments already provide that data?
Number two: do we already have an assessment in place that answers the questions we're trying to answer?

12.35
No. 3 - Will the assessment tell me something new?
Number three: what is this assessment going to tell us that we don't already know? And let's be clear, I am not talking about how well we feel in our hearts that we know our students. We absolutely have to have data that gives us information, but we may find that we are doubling up and overlapping unnecessarily.

12.56
No. 4 - Does it do what we think it does?
Four. Does this chosen assessment actually measure the thing we think it will? For example, will having every student complete a computerized multiple-choice reading assessment actually tell us which students know their phoneme-grapheme correspondences?

13.11
No. 5 - Does every student have to complete the assessment?
Does every student have to complete the chosen assessment? Is there really value in testing every Year Two student on phonemic awareness when it's clear that they can already read and spell? Is it worth performing a text-level assessment with Foundation students who don't know the code beyond the first 12 graphemes? (The answer to that one is no, by the way.) What is the cost-benefit ratio between the time it will take to administer this assessment and the information we will gain from it? Fundamentally, is it worth it? Are there formative assessment practices we could use throughout instruction to help us monitor student progress instead of taking every student aside every single week?

13.53
No. 6 - How you can get real-time information about student progress.
Checking for understanding with mini whiteboards, using exit tickets, observing during a daily review, and identifying a small number of at-risk focus children to check in with regularly can take the place of intensive assessment for every student all the time. This will also give us real-time information about how much of our teaching is sticking for students, and enable us to pivot and adjust as required. Remember, the pace and complexity of instruction are about responding to the needs of your students, not about ploughing through the scope and sequence.

14.32
No. 7 - Check alignment between reading assessment tools and achievement standards.
What is the alignment between our reading assessment tools and the achievement standards we assess against? How reliably can we say that a student who performs well on an assessment tool should be awarded an A, B, or C in reading? For the most part, there is zero alignment between the achievement standards for our grades and these tools. In the past, we believed that reaching a certain spot in our levelled text series equated to grades. It didn't.

If you are a Resource Room member, you have access to the recording of our recent monthly mastermind, which was all about assessing for reporting and covers how you can use your current tools to help inform your grades.

15.13
Are you over-assessing? (continued)
The other consideration with over-assessing is that if we have too many assessment tools on the go, we can't possibly use the data that's being collected effectively. Using data takes time, effort, and headspace. If we have too many tools, fundamentally we're assessing for the sake of assessing and putting a lot of pressure on ourselves and our students.

15.35
No. 8 - What do I do about the mandatory assessments?
I fully appreciate that when it comes to assessment, there are decisions that may be out of our hands, even at a school level. Your system leadership may insist on particular assessments that you know are not effective for measuring progress or planning student instruction. In that case, do what you need to do in the least invasive way possible.

For example, if you are required to submit benchmark assessment data (and that is still happening today), have your two most skilled classroom teaching assistants complete everyone's assessments over a week to reduce the burden on teachers. Submit the data, and then don't give it another thought.

16.15
No. 9 - Using your data to plan for instruction.
The final part of the picture around getting your assessment ducks in a row is how data is understood and used. I think we make some pretty big assumptions about teachers' knowledge of how to, firstly, interpret data and, secondly, use what we learn to plan for instruction.

Saying 'just use the data to plan for instruction' is like asking an apprentice to use the plans to build a house. Now, I'm not saying that our teachers are apprentices, but if we haven't had the experience and learning around how to use data effectively, then what we do is not going to be as effective. Teachers deserve time, and they deserve to have their capacity built as professionals, so they can get the best for their students.

17.00
Get your whole team on board the bus.
Everyone in the team should be walked through the process of looking at data and using it to make decisions about teaching. This includes classroom assistants and the members of our team who provide intervention. Classroom assistants need to have an idea of where students have been, where they are now, and where things are headed. Your instruction will be so much more effective if your whole classroom team is on the same page.

It's incredibly rewarding for me to go into schools and have classroom assistants who are active co-teachers in the classroom say things like, "I was looking at the student data for these particular students, and I noticed some gaps in these phoneme-grapheme correspondences. Do you think I should go back and fill them before we move on?" When we build the capacity of every member of our team, we give our students the very best chance to learn well.

When it comes to the teachers or teams who conduct intervention, they really should be using the same tools as the classroom teacher, plus anything else they need to drill deeper into where the students are up to. Two different people using different assessments to determine the starting point for instruction for one child is a recipe for overwhelm for the child, and I guarantee it will not give you the same quality of outcomes as everyone being on the same page. Having one approach for Tier One and another for Tier Two that follow different scopes and sequences is a very common situation, so if this is your school, it might be time to have a discussion.

18.39
Want to know more?
If you'd like to read more about getting your assessment ducks in a row, I have shared links to two previously published blog posts in the show notes for this episode, which you can find at www.jocelynsemaeducation.com.

In conclusion
I hope that this week's podcast episode has helped you reflect on how you can ensure that your assessment is fit for purpose, that you aren't doing more than you need to, and that, ultimately, your assessment efforts lead to student outcomes. If we want strong growth for children, and we want teaching that supports not only our students but also us in our practice, we need to get this right.

In next week's episode, I'll be covering the third reason that the wheels might be falling off your literacy approach. 

See you then. Bye. 

 

