S6 E10 - Before You Commit to a Program, Listen to This

Hello, hello, welcome to this episode of the Structured Literacy Podcast, recorded right here in Tasmania, the lands of the Palawa people. I'm Jocelyn, and today we're talking about something that's on the minds of many leaders right now, which is choosing a program. If you know my work at all, or you've been listening to this podcast for a while, you know I'm not anti-program, in fact, I make them. And I have to be really honest, I have thought perhaps I shouldn't be, perhaps I should just focus on professional development work. But I know that in order to really support teams, we need to give them tools that get them maximum output, and our programs are written with cognitive load in mind from the very start. So I've decided that I can't step out of curriculum, even though that was never my goal in moving into this work in the first place. But this podcast is not just about if you use our curriculum resources. The Structured Literacy Podcast is a podcast for everybody, whether you choose our tools or someone else's. So, with that in mind, let's continue. Programs can be incredibly valuable tools that support instruction, provide consistency and help build teacher capability. But here's what I want you to understand. The process of selecting a program is just as important as the program itself.
Today, I want to share some thoughts on what leaders can watch out for when considering a program, and how we can position ourselves as quality providers through thoughtful decision-making. And when I say we, I mean leaders and educators alike. We are all providers of quality instruction, that's what we should be, and we have to have the thought processes and the planning structures to make that a reality. So what's our starting point? Before we dive into the specifics, I think we need to be really honest about where we're starting from.
There are generally two scenarios when schools begin looking for a program. The first one is this: we don't really have a strong sense of what quality instruction looks like, and we're getting a program to help us. This one is actually a very common and completely understandable position. We know our students need better outcomes. We all know that what we're seeing isn't nearly good enough. We know something needs to change, but we're not entirely sure what excellent instruction looks like in practice. The second scenario is this: we do know what quality instruction looks like and we want something that aligns with our understanding and supports our goals. To be in this second place, we really have to have a strong understanding of the principles of instruction, of the general characteristics of instruction that works. It's not about saying I used to use XYZ program and I want something that looks like that. It's a really tricky and challenging space to be sitting in, and I've got some supporting suggestions for you today. Both of these scenarios are valid starting points, but they require different approaches to program selection. What we need to be aiming for is research-informed practice, and you may have heard me speak about this view of research before. Research-informed practice sits at the intersection of three critical elements, and this version comes from here in Australia, from a social work perspective. We're looking for research findings first and foremost. We're not looking for research that matches our ideology. Our question needs to be: what does research say about the development of this skill? The problem with research is that it's very often conducted in small-group settings that don't reflect the context of our schools. Those settings also don't reflect our professional knowledge about how to engineer learning, because we are in the business of engineering learning for a large group of students with diverse needs. 
Intervention-focused research often has three or four children sitting around a table, with instruction being delivered by a skilled, knowledgeable researcher, and the same goes for models developed with intervention as a starting point. So we need to look at, yes, what the research shows, but we also have to consider our professional knowledge about engineering learning for a large group of students. The third element we're looking for here is a positive impact on the students.
When we adopt this research-informed model, we must evaluate the impact of our work on student outcomes, both academically and from a well-being perspective. Student outcomes become the evidence of success. Success isn't whether or not we've implemented a program with fidelity. It's not what teachers feel good about, and it's also not what somebody tells us it should be because we've ticked the items on a checklist. Student learning is the only acceptable outcome from any effort. So while we're saying yes, we must respond to research, we must layer in our professional knowledge. If our student outcomes are not there, something has gone awry and needs to be addressed. So remember the goal isn't to buy a program, it's to improve student learning. Everything we evaluate should come back to that.
Here's something else we need to talk about: if we go all in, spending significant money and time on something that isn't going to work for our context or for our students, we've potentially spent all our resources on an approach that isn't serving us. And the hard bit in determining this is asking, and honestly answering, the question: if this thing's not working, is it us or is it the program? It's really easy to blame the program when things don't go as planned, and that's a natural response when we're very first learning.
It's also really easy to adopt a "wait and see" approach. Now, generally in instruction we call this the "wait to fail" approach. We wait to see if the student will catch up, we wait to see if the program will start working, we wait for things to improve, and often this waiting is for two or three years. We say in three years, when we get our NAPLAN results back, we'll be able to determine whether it's worked. Now, all of this waiting comes at a significant cost to student learning if we're not hitting the mark on instruction.
So programs, and our approach to adopting and implementing one, should come with short-term feedback mechanisms so that you can identify whether implementation has been successful and is on track, because without these short-term feedback mechanisms, we're flying blind and we're wasting whole terms and whole years on things that aren't hitting the nail on the head. And this is exactly why, with our programs, we now embed coaching support throughout the implementation process, but I'll tell you a bit more about that later. We'll get better outcomes when we select a program by evaluating it against a set of fundamental criteria that acts as the minimum standard for our first steps. What we often do, though, is go to a Facebook group, or to the school down the road, or to some other social setting, and ask what program works. This is often really unsuccessful, because the people who are sharing their opinion are doing so because they like what they have found. That doesn't mean it's the most successful option. It just means they like it and they feel good about it. So we have to dig deeper.
Timothy Shanahan discusses three areas to focus on in school improvement. There are more than three, but these are the three that relate directly to the classroom and that have the biggest impact on student outcomes. He talks about time on task, whether we're teaching the right content to the right students at the right time, and the quality of the pedagogy. So if we're spending 45 minutes on the basics of a phonics lesson, we're not allowing enough time for the other elements of literacy development, and time on task will be off. If we are not being data-informed about how we make decisions about what to teach, if we're treating every year one or two student as if they're the same, then there's a really good chance that we will not be teaching the right content to the right students at the right time. And then there's the quality of the pedagogy, so much of which comes down to human cognition and what we know from the cognitive sciences. So if the programs we're using do not support working memory, particularly for our strugglers, we're in trouble from day one. So before we spend time and money committing to a program, we need to know what we're looking for so that our work will be successful.
I've previously shared here on the podcast about a NICE framework, and so when we're making decisions, this little framework can help lead our thinking and guide discussions. The N in the NICE framework stands for need. Do we need this thing? How do we know? What in our data tells us that this is needed? We can't make good decisions based on gut feelings or what seems like a good idea or what reflects what everyone else is doing, or what we think they're doing, we need evidence to know where the area of need is. Otherwise, we don't know what problem we are trying to solve.
The I stands for impact. What's the impact we expect to have? What will tell us that this is successful? Often, what that sounds like is, we are successful because lessons look like they're being taught the same way, we are successful because we've implemented the program, we are successful because in our leadership walkthroughs we are seeing that the lesson steps are all present. Again, the goal's in the wrong spot. We need to know what we're looking for in student outcomes as well as all of these other things, because if we can't articulate what success looks like, we won't know if we've achieved it.
The C stands for capability. Will this program or approach help build our capacity, or will it lock us into lessons and permanently remove our capacity to make decisions? Now the goal should be to increase teacher expertise, as we support them and support the whole school approach. We're not looking to replace teacher expertise.
And the E stands for ease. Now, nothing is ever easy in school land, we know that. But in order to adopt a program or an approach, will we have to turn our school upside down, spend our whole resource and professional learning budget on this one thing for one area of curriculum, or place ridiculously unnecessary pressure on our team? If the answer to any of these is yes, then rethink what you're doing, because implementation needs to be sustainable. I'm seeing approaches that look like we want to tick the boxes on explicit teaching and low variance, so whole groups of schools are being required to change every single curriculum area to a scripted or highly guided program. That's not just approaching a literacy goal; that's changing everything we're doing all at once. So ease is not about being lazy and taking shortcuts. It's about responding to the cognitive load needs of our team and our school and our students.
Now I'm about to say something that may sound strange, and if you've been with me for a while, you know sometimes this happens. Most people think that the goal is to get a program, and I've already said here that that's not the goal. It is the goal for most program providers, though. Whether they're free or paid, their goal is for as many schools as possible to be using their resources. But the goal of getting a program is a very different thing from the goal of students learning to read and write. Now they should be the same. You should be able to be on the same page as whoever's providing the resources to you. And I'm not questioning people's motivation, I'm not saying that, hey, people are selling things or giving away resources knowing that they are not going to be useful. That's not what I'm saying. What I'm saying is that, if we really get to the heart of it, we have to know what we're talking about.
Focusing on getting a program means we get a quick win, we tick a box and move on. Focusing on students learning to read and write is a much longer-term endeavour, and when we choose this goal, we're choosing the hard path, we're choosing the uphill climb. It also means we're choosing to commit to the moral imperative of the work. The goal isn't to buy a program. The goal is to improve student outcomes, not just a little bit, but so that every student is succeeding. Everything we evaluate should come back to that. So what should we be looking for? Here are some non-negotiables, probably not an exhaustive list, but it will give you something to start with.
You're looking for short feedback loops that allow teachers to notice when things aren't going to plan and adjust. If a program doesn't tell you whether students are learning until the end of a term or semester, it's far too late. Wheels have fallen off in week two of the term and we've just wasted seven or eight weeks doing teaching that isn't hitting the mark. So we need short feedback loops.
We also need ongoing support and professional development baked into the program. Two days, three days, however many days of training is not ongoing support and professional development. We can also think about it like this: does the program help the teacher learn to make decisions, or does it simply provide scripts to follow? Now, on scripting, I have another podcast episode asking: can great teaching be scripted? Scripting absolutely can have a place when we are first new at something, but no program developer knows the responses of your students in your classrooms at any given time. So there needs to be a partnership between the teacher, the school and the developer. We really have to have partnerships for success. So has the package been structured as a partnership for success, or is it a bit of a wham, bam, thank you ma'am approach where you do the training, you get your stuff and then you're on your own? This is one of the most critical factors from a school improvement perspective. Someone who is invested in your success will be there for you when things don't go to plan, because I guarantee you, at some point they won't. And here's what I really, really need you to understand. If we do the training and have good systems in place, implementation is likely to go well initially. The real threat to student outcomes comes when something about our context doesn't reflect what the program developer had in mind and, let's face it, that's all the time, because no program developer came and wrote you a bespoke program. The wheels fall off instruction when it's time to make decisions for our context and our students, and we don't have the support to do that effectively. And this is why ongoing implementation coaching is so critical. It's not just about the initial training, but about the sustained support when you're making those contextual decisions that determine whether the program will actually work for your students. And why don't more providers do this? 
I'm going to tell you, because it's not as profitable as the model we currently have.
We need programs that build teacher decision-making in. Does the program, and the support that comes after the training, focus on helping teachers learn to make decisions, or does it engineer that development out of the picture?
Supporting working memory is also a must. Has the program been written in a way that prioritises supporting working memory? Does it reflect things like Rosenshine's principles and other frameworks for learning? You could get a sample lesson and evaluate it against Rosenshine's principles. And when one of those principles says we break things down into small chunks and teach them one at a time, we have to know what a small chunk is. What is working memory capacity? Well, I'm going to tell you: it's two to three items. Not that long ago, John Sweller presented live and I got to hear him, and he said, oh, three, I don't know. We're talking about kids, so we're going to go with two. So if the program or the lesson is asking students to focus on more than two things that they don't already have to automaticity, that's the whole box and dice for most of the kids, particularly for the strugglers. So we have to define what it means to support working memory and evaluate through that lens.
This leads into appropriate cognitive load. Does the program intentionally limit how much new information is presented at once and give students the opportunity to practise and embed new learning in context, or does it try to cram in as much as possible in as short a period of time as it can?
Student engagement is another factor. Are students provided with MANY (in capital letters) opportunities to respond and engage academically, intellectually and sometimes emotionally? Or are the students taken out on a tour of content with the occasional request to join in? I do something, you do something. I do something, you do something (thanks, Anita Archer), with students not just repeating or saying things with me, but doing the heavy lifting and thinking in a way that doesn't overwhelm cognitive load. That's what you're looking for in student engagement.
We're also looking for differentiation mechanisms. Does the program provide the mechanisms you need to consider the needs of the students in front of you, or does it treat every year one student or year four student in exactly the same way? There needs to be consideration of where students are up to and what their specific next steps are, particularly in the development of foundational skills. Bottom of the rope, bottom of the model, these foundation skills are constrained and they need to be taught to mastery before we move on. So we have to think carefully about what it means to differentiate, but that's a whole 'nother podcast episode and I'm sure I've spoken about this before. Top of the rope, text-based units, one in all in, and we're providing adjustments so that everybody can access the same content which comes directly from the curriculum. But in the development of the early steps of foundational skills, we need to be targeted, and that's where you get the best outcomes.
Is the program designed for struggling students? This is something that I consider to be critical because it speaks to my moral imperative as an educator. Have the materials been written with the struggling student in mind? There are many ways to stretch students who need enrichment, and we should be focusing on them too, but it's actually really difficult to make content suitable for struggling students if it hasn't been written for them in the first place. You basically have to rewrite it. A program of instruction needs to maximise every instructional minute, be interactive and maximise student engagement. Now the core here is optimising intrinsic load. That's what we're looking for. And let's come back to that central question: will the program improve student learning? That's the lens through which we need to evaluate every single criterion I've just outlined.
I wish we could believe the labels on things, but we have to be critical consumers. Whether the resource in front of us is free, low cost or pricey. We can't believe every claim made by everyone, and we know that every man and their dog is just plonking "evidence-based" and "science of learning" on things. So we have to evaluate for ourselves the impact that these tools will have on our students. Marketing materials will always present the best possible case. No one is putting a testimonial on their website that says, well, yeah, it was ok for some students but didn't meet our needs for others. So testimonials are always delivered by the people who got a really great outcome.
We have to dig deeper. There is a difference between the price of a resource and the cost of the resource. The price is the dollar figure that you pay for the physical elements and training, the cost is measured in so much more than dollars. There's the cost in time, the cost in stress, the cost in lost opportunity for student outcomes, which is the biggest and most serious cost of any decision we make. And this brings us back to our central question again: will this investment, both in dollars and in opportunity, lead to improved student learning?
Now, what do you do, though, when you're not sure? Well, making the decision to adopt a program is easier when you have previous success in getting strong student outcomes for every student, but how do you make decisions when you aren't sure about the answers to the questions that I've asked in this episode? How do we feel confident when we don't have experience in getting results for every student? Firstly, I'd suggest that you avoid asking for people's opinions on programs in Facebook groups or in any other social media platform, because you'll get lots of people's opinions, but not necessarily evidence-informed guidance.
Second, it's important to know exactly what you're looking for. In season three, episode 22 of this podcast, I shared a list of criteria for choosing a phonics program. There is a downloadable tool that you can use to audit the options in front of you, and you can also use it to audit your current practice. So this will help you identify the strengths and areas of opportunity, because no program does everything and we have to know where our programs will hit the mark and where we're going to have to supplement. You can find this on our website at jocelynseamereducation.com and just in the search bar, search for Choose a Phonics Program. Many of the points of consideration for phonics programs also apply to other areas of instruction as well, so there's no harm in downloading the free tool and having a look and seeing whether you can use it for different purposes.
Third, make sure, when you are considering any program, you have the opportunity to evaluate a sample lesson or lessons and, ideally, try them out. Don't commit to something that you haven't been able to see in action. You don't buy a new car without taking it for a test drive. I would suggest that the same needs to be available for our programs. Yes, you'll be a bit clunky in teaching it because it will be new to you, but you should be able to get a general sense of how this thing rolls and the responses of the students, knowing that things always get better with time as we develop fluency with something. Finally, the ideal situation is that you can begin with a simple start and then build up over time. This allows you to evaluate effectiveness before making larger commitments.
Choosing a program is not just about finding something that looks good or feels right because other people are using it. It's about finding something that will genuinely support your students' learning and your teachers' growth. It's about ensuring that every dollar spent and every minute invested moves you closer to your goal of student success, and this is why we're moving to an approach where implementation coaching is a required part of our programs, and we're beginning this with Spelling Success in Action for years three to six and beyond. Leaders will have live, personalised support as they collect data, analyse that data and decide on a strong starting point. For now, schools will have access to very reasonably priced ongoing support from an experienced coach who knows how to get results. We're going to build this up over time and we have some very exciting new developments coming, but that's where we're sitting, so we can ease people into the new way in which we will be working.
I said before that most providers don't go down this road because it's not the most profitable way to do business. It's just way easier to sell the program, sell the training and leave you to it, but we all know what leads to success. We also know what research tells us about strong professional development that really builds teachers' capability in meeting the needs of their students, and it's not short-term training, handing over decision-making to someone else or leaving schools to go it alone. In the previous episode of the podcast, I unpack what research has to say about the characteristics of high-quality professional development, so if you're curious about that, you can have a look and have a listen.
When you are making decisions about your next instructional steps, make sure that you and your students are set up for success. You might choose our curriculum resources. You might choose another provider that's free or paid. Whatever you choose, make sure that it's right for your school and will lead to student success. Remember you're not looking for perfection, that doesn't exist. We're looking for a program that aligns with what we know about how students learn, supports our teachers in making great decisions, provides mechanisms for us to know whether it's working and includes the ongoing support to help you navigate the complexities of your unique context, and does so in a way that doesn't take your entire budget. The process of choosing a program is an opportunity to clarify our values, our understanding of quality instruction and our commitment to student outcomes. When we approach it thoughtfully, with clear criteria and realistic expectations, we set ourselves and everyone around us up for success.
That's it from me for this episode of the Structured Literacy Podcast. Remember you won't break the children, it's going to be ok. But you also have to make sure that you don't break the grown-ups either, especially yourself. Thanks so much for listening. I know these decisions are hard. You've got this. Happy teaching, bye.
Show Notes:
S3 Ep22 - How to Choose a Phonics Program
S6 E9 - From Wasted Hours to Real Impact: Research on PD That Works
If you'd like to find out more about our research-informed professional learning, click here.