

Creating a Culture of Data-Informed Conversations

By Richard Harrold

I was recently asked by a "MAP school," a school using the Northwest Evaluation Association's (NWEA) Measure of Academic Progress (MAP) tests, to advise on how best to structure data conversations at team meetings. The school leader who contacted me told me that teachers at a grade-level meeting will typically pore over a mass of MAP data (usually class-level or grade-level reports) and may have a decent understanding of what the charts mean. However, the meetings often fail to produce any useful takeaways or actions; they tend, instead, merely to confirm what the teachers already know.

I asked a few further questions about the school and the culture of discourse about learning that was developing there. The more we discussed this, the clearer the school leader became about the misconceptions some of his teaching team held about the school's expectations for how they would use MAP data. He had hoped and assumed that the teachers would quickly arrive at a common understanding of what the MAP data was telling them about learning. He also assumed that they would triangulate the MAP data with other assessments (including informal, ongoing assessments and classroom observation) and develop strategies in their mid-to-long-term planning around individualized goals oriented to the learning continuum.

All of this is possible using MAP data, and much of it can be helpful. But that doesn't mean that doing it is always the right thing for every teacher or every student. Even before the teachers this school leader had in mind sat down together to discuss their classes' MAP data, there needed to be a clear understanding of what the school's expectations were for teaching teams using data to inform learning discussions. To this school leader's credit, he was quick to admit that no such clear understanding existed, and his first priority was to build a culture where meaningful data conversations were supported and nurtured.

Many of us learn this lesson the hard way. I recall awkward silences in the team room when, as a new principal with MAP as our school's adopted tool, I sat down with a fourth-grade team and we projected a volunteer teacher's Class by RIT (Rasch Unit) report (as it was then called) on the interactive whiteboard. The meeting had actually begun quite well. There was palpable excitement about this new data tool we had. MAP provided us with a new depth of learning data, and the fourth-grade students we were supporting had sat the MAP tests only days before the meeting, so there was an unprecedented freshness to the information we were looking at. Previously, the school had used the Iowa Test of Basic Skills for its annual external assessment. The students sat the tests in May and the results were posted back to us around September, at the beginning of the following school year. I doubt many teachers learned much from them, and they were almost universally regarded as nothing more than a summative assessment for the year.

On the other hand, here, in this fourth-grade team meeting in the heat of a Mexican October, the teachers were looking at data from students they would continue to be teaching for the next eight months of the school year. The only problem was, as the awkward silence demonstrated only too clearly, we didn't have much of a culture of data-informed conversation. The MAP test gave that team an enormous amount of information, and I realized I would quickly have to create some guidance around how to use it or my teachers would drown in the data deluge!

I began with the reports. At the end of the school year, parents would be receiving the individual student report, which would show growth between the fall and the spring terms. It was important, therefore, that teachers were familiar with that report as they would doubtlessly be asked about it at parent information meetings and student-led conferences. Setting that expectation (while there was still plenty of time before the parents saw the report) was an easy step. But there were still dozens of reports that could be generated for various purposes, and teachers deserved to know which of these the school would be expecting them to understand and use. I consulted with colleagues in other schools who were using MAP and with our account manager at the NWEA. I decided that teachers could benefit most from understanding and using the class breakdown report and the grade level report (which were very similar). It's worth noting that although I am describing the reports that were available over ten years ago, all three of these reports are still offered (and have been substantially improved), and all three are still extremely useful.

As the term progressed, I learned more about how useful these reports were to teachers trying to construct and mentor individualized learning with students. But I also learned more about the value of building a culture of data-informed discussions. Team meetings generally included shared data discussions, and more often than not, it was a MAP report at the center of the chat. Typically, we might put a class-level or grade-level report on the screen and invite anyone to comment on something they noticed. Human nature being what it is, it is usually the outlier information that attracts notice first, so the conversation might begin with someone noting that a particular student was a band or two below or above his or her peers. If students in this situation are known to the teacher and are already receiving appropriate support, this might not be a concern, but if the finding is surprising, it might prompt action. MAP data quickly became part of the toolkit used by special education needs teachers and enrichment program teachers. The shorter (20-item) MAP tests found a place among the resources used by teachers in both these departments. Occasionally, such tests were used in the admissions process to determine placement (though not, at our school, to inform enrollment decisions).

As teachers became more familiar with the reports, we found we could introduce additional tools such as the quadrant reports and goal setting. The NWEA developed whole suites of new products, some of which we adopted and some of which we left to teachers' individual choice. After the introduction of web-based MAP (by which time I had moved on from Mexico and joined my present school), I found the time spent on supporting and developing teachers in data-informed discourse had paid huge dividends. Teachers who understand the MAP process make more effective proctors and are better able to support students who may be new to the process. They can speak knowledgeably and reassuringly to parents who may have doubts about assessments they perceive as summative (which MAP is not designed to be).

They can also lead the data-informed discourse strategy. One teacher suggested we track response times as part of our investigation into why some students appeared to be stuck in the bottom right-hand quadrant. This is the quadrant in certain reports where you find students with high achievement but low growth. They appear to be the ones who, to use the words of one of my colleagues, "have the smarts but are coasting."

We found response times in this quadrant to be much lower than those of students in the upper right-hand quadrant. One student finished her test in just 13 minutes when the average time for her class was well over 45 minutes. Bear in mind that MAP tests adapt to the test-taker's responses: if this student was answering questions correctly, the questions would increase in challenge and require more time and concentration. She was clearly answering questions as quickly as possible just to get the test over with. She knew her final score would be respectable enough; she wasn't interested in being challenged. This report allowed us to dig deeper into the circumstances some particular children in that grade level were experiencing. We could introduce interventions that went beyond academic help and supported students in other ways. This was an extremely powerful outcome, and education is ultimately about outcomes. The impact we have on the learner is the only meaningful measure of our input.
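For schools that export their MAP results, the quadrant-and-response-time check described above can be sketched in a few lines of analysis. This is a minimal illustration only: the field names, the growth target, and the "rushed" threshold are all hypothetical assumptions, not NWEA's actual export format or recommended cut-offs.

```python
# Illustrative sketch: flag students in the "high achievement, low growth"
# quadrant whose test duration was far below the class average.
# All field names and thresholds are hypothetical, not NWEA's actual format.

from statistics import mean

def flag_possible_coasters(students, growth_target=5, duration_ratio=0.5):
    """Return names of students who score above the class average (high
    achievement), show growth below the target (low growth), and finished
    in under half the average test time (possibly rushing)."""
    achievement_cut = mean(s["score"] for s in students)
    avg_minutes = mean(s["minutes"] for s in students)
    return [
        s["name"]
        for s in students
        if s["score"] >= achievement_cut              # high achievement
        and s["growth"] < growth_target               # low growth
        and s["minutes"] < duration_ratio * avg_minutes  # rushed the test
    ]

# Hypothetical class data: RIT-style scores, term-to-term growth, test minutes.
students = [
    {"name": "A", "score": 225, "growth": 2, "minutes": 13},
    {"name": "B", "score": 228, "growth": 8, "minutes": 48},
    {"name": "C", "score": 198, "growth": 3, "minutes": 45},
    {"name": "D", "score": 210, "growth": 9, "minutes": 50},
]

print(flag_possible_coasters(students))  # → ['A']
```

A check like this does not diagnose anything on its own; as in the example above, it simply gives the teaching team a short list of students worth a closer conversation.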

We introduced MAP testing as a tool to support individualized instruction and to enhance academic progress. Yet in this instance, we were also able to use it to inform and support interventions in wellbeing, a different but linked part of the school's curriculum offer. I doubt we would have been so effective (and certainly not as efficient) had we not spent time developing a staff where data-informed discourse was part of the supported expectation.

Richard Harrold is currently an administrator at ACS International Schools and a former school district coordinator for MAP.



01/08/2023 - Paula
Thanks for re-sharing this Richard, I missed it the first time around. We are currently doing a deep dive into how we talk about and use the myriad data we collect; we're using the Harvard EDx DataWise course school-wide, which is proving helpful as we work to develop a common language, demystify data and focus on just one small piece of data/idea at a time. This article was very helpful in clarifying my thinking and a great one to share with our Data Leaders! Thank you!


