A curriculum specialist says yes, and here’s why. 

GUEST COLUMN | by Cindy Jiban

Oral reading fluency assessment is now commonplace in the primary grades. This represents a significant step for data-based problem solving in education.

However, given some of the side effects that have emerged, it is clear that fluency assessment is due for significant structural improvement.

Decades ago, a brief and research-based approach to tapping reading progress emerged: plotting how a student’s number of words correct per minute (WCPM) climbed in response to different instructional approaches (Deno, 1985).

By the late 2000s, these procedures became central to tiered support models like Response to Intervention.

But there have been unwanted side effects.

First, we have lost clarity about the ultimate goal in literacy instruction: reading with comprehension, not fluency itself.

Second, we have oversold WCPM as the most appropriate gauge of meaningful growth for both high-performing and emergent readers.

And finally, we have grown accustomed to a tremendous time-cost that robs kids of interactive literacy instruction.

With technology, how can we evolve fluency assessment to a place of greater relevance and efficiency?

Comprehension: In many systems, oral reading fluency is assessed in isolation, without comprehension checks on what the student reads aloud. Instead of adjusting their rate to best support their own understanding, many kids aim to read as quickly as possible. If we want to keep reading comprehension as the central goal, then we need to attend to the message we are sending kids. It’s not that better reading is faster reading; instead, better reading means understanding more from the text.

Meaningful growth: Increases in words correct per minute (WCPM) are most meaningful as kids are developing automaticity with words. As readers move from 10 WCPM to 90 WCPM, they free up mental space, no longer spending so much attention on sounding out, re-trying, and self-correcting at the single word level. Tuning in to the meaning becomes possible when words are more automatic.

But is faster always better?

No. For kids who are reading smoothly from grade level text—and understanding it—we don’t care if they can read faster. For these kids, meaningful growth is about handling harder and harder texts with good comprehension.

Assessing only WCPM can fail kids on the lower end of reading development, too.

When a student scores zero WCPM for several months, “no growth” is not an appropriate conclusion.

Instead, meaningful growth is likely happening in skills like phonemic awareness, letter knowledge, oral language, and vocabulary. Passage-based fluency assessment should be located partway along a progression of measures. It strains human capacities to flex across this progression on the fly, for each unique child. Here is where technology can help us: assessment can adapt across the progression to find growth.

Time: In many schools, all students in the first and second grade are given oral reading fluency assessments one-on-one. To assess a whole class each season, the teacher gives up several days of literacy instructional time — instruction that we know should be characterized by responsive, high quality teacher-student interactions.

In some schools the WCPM approach has been supplemented with richer information from one-to-one assessment. These tools typically include running records, comprehension checks, and an adaptive search for each student’s instructional text level. To accomplish this, teachers often spend half an hour or more per child. That easily adds up to three weeks each season, taken away from instruction in the literacy block.
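The time cost above can be sketched with some back-of-the-envelope arithmetic. The class size and the daily one-to-one assessment window here are illustrative assumptions, not figures from this column:

```python
# Rough sketch of the time cost of one-to-one fluency assessment.
# Assumed numbers: class size and the daily one-to-one window are
# illustrative; "half an hour or more per child" is from the column.
students_per_class = 25
minutes_per_child = 30            # half an hour per child
one_on_one_minutes_per_day = 50   # assumed daily window in the literacy block

total_minutes = students_per_class * minutes_per_child
days_needed = total_minutes / one_on_one_minutes_per_day

print(f"{total_minutes} minutes of assessment, "
      f"about {days_needed:.0f} school days each season")
```

Under these assumptions, the assessment consumes roughly 15 school days, about three weeks of the literacy block per season.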

New solutions: Fluency assessment hasn’t changed much in 25 years. When technology was first enlisted, it typically meant trading the teacher’s pencil and paper for stylus and screen. This method still required one-on-one administration, so teachers did not get significant time back for instruction. Nor did the method add a focus on comprehension or adapt across a progression to find relevant growth.

How might current technology help us to redesign fluency assessment for the better?

Two places of leverage are key: computer adaptive testing and automatic speech processing.

First, the mature technology of adaptivity can locate meaningful growth for each individual. Imagine an assessment that adjusts to each child based on a foundational skills progression or on text complexity.

Second, let’s shift our thinking to recognize that today, we regularly speak to our technology. We have phones and tablets—maybe even a little smart speaker—that process speech masterfully. Our children are growing up in a world where speech processing is pervasive. What if fluency assessment no longer required one-on-one teacher administration in every instance?

Let’s use technology to design a better solution for fluency assessment.

Cindy Jiban is a senior curriculum specialist for NWEA. She has taught in elementary and middle schools, both as a classroom teacher and as a special educator. She earned her Ph.D. in Educational Psychology at the University of Minnesota, focusing on intervention and assessment for students acquiring foundational academic skills. After contributions at the Research Institute on Progress Monitoring, the National Center on Educational Outcomes, and the Minnesota Center for Reading Research, Cindy joined NWEA in 2009.