S6 E17: How to Streamline Your Assessment Schedule

Hello, hello, welcome to this episode of the Structured Literacy Podcast recorded right here in Tasmania, the lands of the Palawa people. I'm Jocelyn, and today you are going to find answers to the question: How do we get our assessment schedule in order? When I'm working with schools, one of the to-do list items is often to define the assessment schedule for literacy. It's common for a school's assessment schedule to be really, really full. Leaders and teachers know instinctively that they probably shouldn't be doing all the things, but knowing how to cut down is another thing entirely.
Now, I've written several times about assessment over the years, and I have a whole chapter of my book devoted to it. So if you have a copy, you're looking for chapter 15 in Reading Success in the Early Primary Years. I've also talked a ton here on the podcast about data in general and its role in helping us evaluate the impact of our practice. In this episode, we're going to dive into an aspect of assessment that will help give you clarity and guide your efforts to strip back the assessment schedule to only what is really needed.
The key to determining what assessments to include in that yearly schedule is to know what questions you're trying to answer. This is also part of the picture in helping to grow our capability in using data effectively. And this, incidentally, is another of the common goals that schools have.
So, what are the questions we're trying to answer with assessment?
We're all familiar with the idea of short-term and long-term data and diagnostic, formative, and summative assessment. We've heard it talked about since university. And the funny thing is that while we've been hearing about this for years, most schools and most teachers still grapple with it. I think part of the reason this is so difficult is that we have been so used to doing assessments because someone told us to, and so used to using assessments in particular ways that don't really connect to evidence, that we're now having to rethink a lot of what we thought we knew about assessment tools and their value.
Who amongst our students is at risk?
Let's begin with the first and most important question we're trying to answer. Who amongst our students is at risk? This is where normed universal screening comes in. It's critical that we identify which students are most in need of high-intensity instruction. In many schools, this is left up to teachers to decide based on their impression of students. The impact of this is that the students referred for support are often the wrong students, which leads to an inefficient allocation of precious resources and students who actually need support not getting it. And this comes about not because teachers aren't intelligent, but because there are lots of false positives that can come up when we think about how confident a student looks or how well that student is able to articulate verbally. Now, a normed screener isn't necessarily the only tool we can use to identify students who need additional support, but it is an important part of the assessment schedule.
How effective have we been in our literacy instruction?
The other question our normed screeners help us answer is: how effective have we been in our literacy instruction? Tightly targeted instruction that's doing what it needs to will result in a reduction in the number of at-risk students and an increase in the number who are at or above benchmark. So it's not just about saying yes, we do intervention; it's also about asking how impactful our instruction is across the school. If we're not asking that question and acting on the answers, then we're never going to get where we want to go.
There are a few tools around these days that help us measure things like oral reading fluency, but you don't have to start with a heavy-duty assessment straight away. Perhaps your school is still using a levelled benchmark assessment tool such as PM Benchmark or Fountas and Pinnell. Trust me, there are lots of schools out there still doing this; they're very new to this space, and if that's you, I don't want you to feel like you've done something wrong. Change is hard. One small change in the right direction is to take any grade-appropriate text and the Hasbrouck-Tindal Fluency Norms and use those together. The norms tell you how many words students need to be reading correctly per minute at the beginning, the middle, and the end of the year. And what is a grade-appropriate text? That's a bit of a tricky question to answer, but at the end of the ACARA literacy general capabilities document there's an appendix describing what those texts need to include at different grades. It's worth mentioning that the normed measures of oral reading fluency don't begin until Year 1. In fact, the Hasbrouck-Tindal norms, from Jan Hasbrouck and Gerald Tindal, don't kick in until the middle of Year 1. That makes a lot of sense to me, because before that point students won't have learned enough code to tackle the texts, which contain a whole range of alphabetic code; they're not decodable.
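If it helps to see the arithmetic, here's a minimal sketch in Python of how a one-minute fluency check works. The benchmark numbers in it are placeholders I've invented for illustration; the real cut-offs come from the published Hasbrouck-Tindal tables for the relevant grade and time of year.

```python
# A minimal sketch of a words-correct-per-minute (WCPM) check.
# NOTE: the benchmark values below are made-up placeholders; substitute the
# published Hasbrouck-Tindal norms for the relevant grade and point in the year.

PLACEHOLDER_50TH_PERCENTILE = {
    # (grade, point in year) -> WCPM at the 50th percentile (illustrative only)
    (1, "middle"): 25,
    (1, "end"): 55,
    (2, "beginning"): 50,
}

def wcpm(words_read: int, errors: int, seconds: int = 60) -> float:
    """Words correct per minute from a timed read of a grade-appropriate text."""
    return (words_read - errors) * 60 / seconds

def at_or_above_benchmark(score: float, grade: int, point: str) -> bool:
    """Compare a student's score with the (placeholder) norm for that point in the year."""
    return score >= PLACEHOLDER_50TH_PERCENTILE[(grade, point)]

# Example: a Year 1 student reads 48 words in one minute, with 3 errors.
score = wcpm(words_read=48, errors=3)                  # 45.0 WCPM
print(score, at_or_above_benchmark(score, 1, "end"))   # 45.0 False
```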
Decodable text assessments
I want to take a bit of a sidestep here and talk about decodable text assessments. A growing number of decodable text series come with text-level reading assessments, and I'm yet to be convinced of their usefulness. One of the common uses is to determine which texts students can read, so teachers feel hamstrung because they think, "I can't choose books for my students to read because I haven't tested them." I worry about this, because there's a real danger of teachers staying stuck in the idea of levelling when it comes to choosing instructional materials. Another downside of these assessments is the time they take to complete. And because they're not normed, they haven't been validated to show that they measure what they claim to measure.
If our students are reading decodable text daily, teachers will have ample opportunity to observe their reading and take some brief notes about how students are progressing. There's no reason that you can't focus on two to three students each day to check in with. If this practice is established, there's no need to take an extra 15 or 20 minutes per student to listen to them read one by one.
So these tools feel good because they look like what we're used to, but in relying on them, we could actually be holding our teachers back from building their capability. The other problem is that when we rely on a test to tell us which books students can read, it becomes virtually impossible to use a wide range of texts, which is a really good thing to do, because we don't have a test for every one of them. Learning to choose which books students can read based on their current skill and knowledge is a much more productive use of time.
Measuring the skills and knowledge
Question two: we have to look at that next layer of assessment, which is about measuring the skills and knowledge that lead to success on the universal screener. Yes, we can get some of that information from the universal screener itself, but there are other sources. So let's begin with the early years. Predominantly, this is about phonemic awareness, phonics, and early morphology.
In the early years, we look at whether students have learned phoneme-grapheme correspondences and are blending effectively. This is usually conducted once per term in an interview-style assessment. It's really important that this assessment is aligned with your phonics scope and sequence, because this is also the assessment that will help you know you're teaching the right content to the right students. It also acts as your diagnostic assessment to determine which gaps need to be filled for each student. If your school is using something like DIBELS, yes, there is a correct letter-sound score. But what you're testing with that score is mostly basic code. You're not testing the full breadth of the complex code or looking at whether students have achieved understanding of the full alphabetic principle. So the test that comes with your phonics program is really important.
The questions we're answering for all students with this assessment are:
- Has the content we have taught stuck in the medium to long term?
- Did the thing we taught in week one stick until week ten?
- What gaps do our students have in their phonics knowledge that we need to actively fill? That is so important if we're going to have universally strong reading and spelling outcomes for our kids.
- Are our students making appropriate growth in phonics learning to move their reading forward? This question is critical, because if students aren't learning somewhere between eight and ten graphemes every term of the early years, they simply aren't going to be where they need to be by Year 3.
And remember, we're not talking about how many graphemes you've introduced, we're talking about how many graphemes the students have consolidated.
How are our students progressing with blending?
Finally, how are our students progressing with blending? There are several milestones in blending that students achieve on their way to word recognition, which is just being able to say the word when you see it. You can read more about this on page 186 of my book, Reading Success in the Early Primary Years, if you have a copy. Students progress from sound-by-sound blending to automatic word reading, and tracking this development is essential so that you can see where more intensive support is needed.
Has what we taught stuck in the short term?
In between the once-per-term phonics assessments, the priority becomes answering the question: has what we taught stuck in the short term? Can the students remember what we covered last week? This assessment I call a check-in. It's not a full assessment; we're checking in. It's a short-term check for understanding, if you like. It can be done once per week and does not involve sitting each child down one at a time. Instead, physically spread your students far apart in the room and have them write down the graphemes you taught last week, plus any you know they've been having trouble with. Having students write the grapheme when you say the phoneme is an excellent way to determine whether students are retaining what you've taught. It also shortens the feedback loop so that you can act immediately if new content isn't sticking.

After all, there is little point in continuing to teach new content if what has already been taught hasn't stuck. If you just forge ahead and follow the pacing guide in your phonics program (incidentally, we don't have one in ours, because we want you to make decisions based on where the students are up to), there's a really good probability that you will have students who, at the end of the term and the end of the year, have significant gaps in their learning. We have to be watching what is happening as we're teaching. And then, of course, you'll be checking for understanding within lessons to see if what you have taught has stuck in the moment.
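To show how light-touch the record-keeping for a check-in can be, here's a small hypothetical Python sketch that tallies one week's whole-class results and flags graphemes that need reteaching. The 80 per cent mastery cut-off is my assumption for illustration, not a published rule.

```python
# Hypothetical record of one weekly grapheme check-in: the teacher says the
# phoneme, students write the grapheme, and we count how many got it right.
class_size = 22
checkin_results = {"ai": 20, "ay": 21, "igh": 13, "oa": 19}

# Flag graphemes below a mastery threshold for reteaching before new content.
# The 0.8 cut-off is an illustrative assumption, not a published standard.
MASTERY_THRESHOLD = 0.8
needs_reteaching = [grapheme for grapheme, correct in checkin_results.items()
                    if correct / class_size < MASTERY_THRESHOLD]

print(needs_reteaching)  # ['igh'] -> review this before moving ahead
```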
Before we move on to upper primary, I will also mention that proper assessment of phonological and phonemic awareness should be conducted with all students in Foundation, at least, to identify those students who aren't progressing as they should be. So, yes, there's blending and segmenting in DIBELS, but it doesn't cover the full range of skills that need to be developed. Once students are blending and segmenting with confidence and you've seen that they can perform the other skills, you can pull right back on that testing. If they've got it, don't keep testing them. But it's an important part of early years assessment and shouldn't be overlooked, particularly for students who you know have some struggles.
Upper Primary
Let's change course and talk about upper primary now. In upper primary, Years 3 to 6, we complete the same universal screener as in the early years; obviously, there are different tests for different grades. We also need to look further into the knowledge and skills of students to make sure that they have what they need to succeed in oral reading fluency. Now, I won't go into a lot of detail in this episode about looking into upper primary skills and knowledge, because I've recorded a few podcasts, including Season 6, Episode 12, When Repeated Reading Doesn't Work, that help us understand four reading profiles and the needs of each one, including how to determine the knowledge students have about how the language works. And we're basically talking about phonics to start with.
When it comes to assessment, we can learn so much from a whole-class spelling test, and we have these tests freely available on our website to help you work out which of your students have the phonics and early morphology they need for strong reading. When it comes to spelling itself, things get a little trickier. As much as we all wish we had one, there just isn't a normed, reliable assessment for spelling that gives us the kind of information we have for reading. There isn't a test that says this student is where they need to be for their age in spelling, across phonics, orthography, and morphology. In the absence of a normed screener, the best we have is diagnostic assessment to help us figure out what students do and don't know.
Sitting in the same space as vocabulary
Now, there are some reliable assessments that tell us what students know when it comes to phonics; they're diagnostic in nature, and I've just talked about them. But when it comes to morphology beyond inflectional morphology, beyond the past tense 'ed', the plural 's', and the 'ing', we don't have anything nearing complete effectiveness. Part of the reason is that derivational morphology, which is everything beyond those basic eight suffixes we usually teach in the early years, sits in the same space as vocabulary. We don't expect to have a vocabulary test that we give to all of our students, because vocabulary is unconstrained, and a lot of the knowledge that comes with morphology is also unconstrained. So I'm sorry to tell you, there isn't a test that will tell you where your students are up to in morphology broadly before you begin teaching. The best we have available at the moment looks at the very basic levels of suffixes and suffixing conventions.
Now, there was supposed to be a normed prefixes and suffixes assessment released a year ago, but at the time of recording, in October 2025, it's still not out. In the meantime, the assessments we do have determine whether what we have taught has stuck. So know that if you are using our Spelling Success in Action program, which leans very heavily into morphology, there is so much learning that comes out of it. It's not just about spelling; it's about vocabulary building, about comprehension because of that vocabulary building, and about reading accuracy when it comes to multimorphemic words. The assessment you do each week within the program gives you that short-term data. It's the same as phonics: it's important that we're testing and checking at the time of teaching. We also need to be checking in after a period of time so that we can see whether review has been effective. For us, that sits in the fifth week of no new content, the consolidation week, where you can check in on what you taught four weeks ago, three weeks ago, and two weeks ago, and determine whether what you have taught has stuck.
Getting the balance between rigour and time management right in assessment isn't always easy, but if we know what questions we are trying to answer through our efforts, we can make sure we aren't wasting time on assessment that doesn't help instruction. Yes, there are system requirements like the Phonics Screening Check that are important, and we know about NAPLAN, but at the school level we have quite a lot of choice.
Clear Questions
When optimising your assessment schedule, start with clarity about the questions you're trying to answer. First, identify who is at risk through normed universal screening completed at the start, middle, and end of the year. Second, measure the skills and knowledge that lead to success through aligned phonics, phonological awareness, and morphology check-ins. Remember that not every assessment needs to be done one-to-one; whole-class spelling tests and quick grapheme checks can tell you what you need to know efficiently. And finally, let the questions you're trying to answer drive your assessment choices, not the other way around. Resource Room members have access to assessments and professional learning that help them understand how to use our instructional materials to respond to student need and achieve the outcomes we're looking for.
And remember, the only acceptable outcome of all the work we're doing is that every student is learning to read and spell with confidence. That's it. If we don't have every student succeeding to the best of their ability, then our work isn't done.
The last discussion point I want to leave you with is this. When it comes to assessment that sits outside of what I've discussed in this episode, let's think about a few questions that can guide us in making decisions. The first one is:
- Does the test measure what we think it's measuring?
- Has this been confirmed?
- Has this been reviewed to determine that it is actually measuring the thing that it says it's measuring?
Is a multiple-choice comprehension assessment really measuring comprehension? Considering what we know about comprehension and knowledge, I'm going to reiterate what other people in the field have said: any test that involves asking students a range of questions about an unseen text is, at best, a knowledge test. And those multiple-choice questions? I've never been particularly confident that they're reliable. Some students can use their background knowledge and their capability in language to make really good guesses and give us lots of false positives. So I would seriously question the validity of some tests that are widely used.
Which activity will give us the most value?
Engaging in assessment involves prep time, practice time, and the actual time to do the test, including the reallocation of people to help, particularly in the early years. Is the time spent on an assessment going to give you more benefit than if you used that time for actual teaching? If what you get from an assessment is some slightly interesting data that you're never going to think about again, then that time may be better spent on learning rather than on assessment that gives us data we don't use.
Does the assessment inform our teaching?
If doing an assessment just gives us some kind of score without connection to the curriculum, without giving us clarity on what to do next to further the student's journey, where is its value? What are we getting out of it? The ability to say, look, we have an assessment schedule with things on it that other people recognise? Or actual information that helps us do our core work? If an assessment is a nice-to-have exercise, then it's probably time for a rethink. Let's hold space for the assessment we cannot teach without. That's what makes it a must-have.
And that brings us back to question two: is there more value in doing an assessment and looking at the data, or in taking that time and using it to deliver instruction? These are hard questions, and I understand completely that I'm challenging some established thinking in many, many schools. But time is precious: teachers' time, leaders' time, and most of all, students' time. So let's make sure we're using it well and maximising every minute.
And that all sounds a bit heavy, but I want to leave you with this encouragement: assessment doesn't have to be overwhelming. When you're clear about what questions you're trying to answer, you can build an assessment schedule that gives you the information you need without drowning in data you're not going to use or stealing precious instructional time. I know that Mildred comes in and says, "Well, if you don't do the assessment, you're not doing this reading business right." Well, Mildred, go away. We're not listening to you. And if you haven't heard about Mildred before, she's not an actual person. She's the voice in your head that tells you you're not getting it done, also called imposter syndrome. Give yourself permission to allow for common sense, and work together as a team to define what assessment can and can't do for student outcomes. Focus on what matters: identifying students who need support and ensuring that what we're teaching is actually sticking. Everything else is just noise.
Thanks so much for sticking with me to the end of this episode of the Structured Literacy Podcast. I hope it gives you some clarity to focus on what truly serves your students and your school community. Keep listening until after I say goodbye and have a little dance party with a very fun song that we include at the end of every episode. So don't press stop yet. Keep listening. Until next time, happy teaching, everyone. Bye.
Show Notes:
Reading Success in the Early Primary Years
Hasbrouck-Tindal Fluency Norms
ACARA literacy general capabilities
S6 E12 - When Repeated Reading Doesn't Work
Looking for assessments you can use with your students? Join us inside The Resource Room!