Will the digital tool I’m thinking of using with my students tomorrow widen or narrow the achievement gaps between students in this class?
What instructional choices could I make to increase the chances of this digital tool narrowing these gaps rather than widening them—or creating new ones?
I confess these are not the sorts of questions that are usually on my mind the day before I introduce a new digital reading or writing tool to my students. I’m usually thinking about what I’ll do if the Internet connection is glitchy or how much time it will take us to create new accounts and get down to work. When I ask my colleagues, their response is similar: there are so many things we’re already juggling, the impact digital tools may have on achievement gaps is not really something we have time to think about.
But maybe it should be.
Something I’ve started noticing is that, while a strong majority of my students enjoy using new digital tools, there are significant differences between students in terms of what they actually do with a new tool as well as what they take away from a technology-infused lesson or unit to apply in their future learning.
Some students come to class with rich prior experience not just with social media apps, but with content creation tools. They readily "get" a new tool such as Citelighter (for online annotation and curation of sources) or Voicethread (for multimodal presentation and discussion of texts)—they play with it fearlessly, figure out its affordances and constraints, and by the end of class they are showing me something I had no idea the tool could do.
Other students are initially less knowledgeable and more tentative. They follow the directions. When they get stuck, they rely on others for help. At the end of the day they may have accomplished the assigned tasks, but if I later ask them what they thought of the new tool, their response is likely to focus on a frustrating or fun moment (e.g., “It was frustrating when I couldn’t get the highlighter to change color”; “It was fun to record our voices”), not on the tool’s key affordances.
And these observations have got me thinking about Matthew effects.
Matthew Effects—the “Old” Kind
In 1986, Stanovich drew everyone’s attention to the phenomenon of dramatically diverging learning trajectories experienced by students with different initial levels of reading-relevant knowledge. Students who start with strong phonemic awareness, alphabet knowledge, and vocabulary knowledge get off to a strong start and generally keep doing well. From early on, reading for them is about learning new words, acquiring interesting background knowledge, and engaging with meaning and ideas. And because early success in these areas increases their motivation to read, their performance goes up not just incrementally but by leaps and bounds: “early achievement spawns faster rates of subsequent achievement” (my emphasis, p. 381).
By contrast, students who struggle early on are much more likely not just to remain behind their peers but to gradually lag farther and farther behind. When they read, most or all of their cognitive resources are devoted to laborious decoding of words. Indeed, this decoding work is so onerous that they don’t often get to engage with meaning and ideas or learn new words and background knowledge. Their progress is slow and incremental at best.
Matthew Effects—the New Digital Kind
Now let’s add to the mix Google Search and a sampling of web tools featured in past TILE-SIG blog posts including online annotation tools (e.g., Citelighter), multimodal composing tools (e.g., Voicethread), resource curation tools (e.g., Symbaloo), and some free educational iPad Apps.
For some educators, the hope has been that these tools would somehow help to level the playing field and close achievement gaps—at least under conditions of equal access to screens and Internet connectivity. As one colleague put it a couple of years back: “Now my kids with less background knowledge will be able to Google the words and the information they don’t know. When they read a difficult text, they’ll be in much better shape than before.”
This colleague and I now shake our heads at our past naiveté.
The reality we’re seeing is that, with new digital tools in the mix, we may have opened the door to a new class of digital Matthew Effects.
Take something as basic as search engine use. Some 6-year-olds now start school with considerable experience and expertise with "Googling" information (Dodge, Husain, & Duke, 2011; Rideout, 2013). And by the upper-elementary grades this initial difference between students can turn into a significant skill and knowledge gap that's hard to close—because of a dynamic that's similar to what Stanovich described for early reading development. Students who get an early start "Googling" keep getting better. For them, searching reliably leads to new knowledge and vocabulary, and searching for information consequently feels fun and rewarding. Here again, "early achievement spawns faster rates of subsequent achievement."
Other students don’t get the early start and are much less adept. Their searches are imprecise and often don’t lead to usable results. Consequently, over time, they don’t get the same boost to their background knowledge and vocabulary growth that their more skillful classmates enjoy. They tend to be less motivated to Google for information, and when they do, they are less persistent.
And the obvious remedy here—devoting class time to getting all students up to speed—may not be the quick fix we hope it will be. It takes time and, done well, really needs to involve the full gradual release of responsibility model (Pearson & Gallagher, 1983): explanation, teacher modeling, guided practice, and independent practice.
It also risks creating a version of the situation that Allington (1983) warned about in his article “The reading instruction provided readers of differing reading abilities”: well-intentioned teachers (yes, I include myself here) giving their striving readers additional time with phonics practice and other forms of remedial instruction that, in effect, deprives these students of richer literacy experiences—meaning-focused discussion about story characters, interesting information, and so on. Today the danger is that, if we pull some students aside to work with them on the basics of using a new digital tool, they may miss out on the fun and engaging work of creating content or discussing new information and ideas.
None of this makes me or my colleagues think we should pull back on our integration of digital tools. Still, it has given us pause and made us think harder—or at least make a commitment to think harder in the future—about what we can do to mitigate new types of Matthew effects in our classrooms.
From conversations with colleagues, I have distilled the following four preliminary ideas:
1) Seize every opportunity to help students distinguish between a new digital tool’s “bells and whistles” features and its more important cognitive affordances—for supporting some aspect(s) of the mental work involved in reading or writing. Being explicit about these affordances may help less tech-savvy students stay focused on what’s most important for their learning—and why they may want to remember today’s digital tool for possible later use.
2) Over the course of a semester, put students in mixed-ability “tech mentor” groups. Each group is responsible for helping the teacher give the class a basic orientation tour and answer questions about one digital tool. (The idea is that, if every student develops genuine expertise with at least one digital tool, this will generate confidence and seed future development of expertise with other tools.)
3) Seize every opportunity to communicate with parents and guardians about the tools you’re using and their educational value—and point out possible parental uses of new digital tools (e.g., Google Docs)! Especially with our younger students, home support and encouragement may play an important role in sustaining interest and growth over time.
4) Whenever possible, try to coordinate your efforts with those of colleagues teaching in grades above and below yours. It will require coordinated efforts across grades to avoid and/or reverse digital Matthew effects.
TILE-SIG will host a special session on Sunday, May 11 at 3:00 p.m. at the International Reading Association 59th Annual Conference in New Orleans. The session includes the presentation of the 2014 Technology in Reading Research Award, "Changing the Landscape of Literacy Teacher Education: Innovations with Generative Technology" with keynote Dana Grisham (National University, TILE-SIG 2013 Reading Research Award Winner), and 18 roundtable discussions about research findings and practical classroom ideas. Visit http://www.iraconference.org to learn more about IRA 2014 or to register.
Paul Morsink is a doctoral candidate in Educational Psychology and Educational Technology at Michigan State University, firstname.lastname@example.org.
Allington, R. L. (1983). The reading instruction provided readers of differing reading abilities. The Elementary School Journal, 83(5), 548-559.
Dodge, A. M., Husain, N., & Duke, N. K. (2011). Connected kids? K-2 children’s use and understanding of the Internet. Language Arts, 89(2), 86-98.
Pearson, P. D., & Gallagher, M. C. (1983). The instruction of reading comprehension. Contemporary Educational Psychology, 8(3), 317-344.
Rideout, V. (2013). Zero to eight: Children’s media use in America 2013. San Francisco: Common Sense Media.
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21(4), 360-407.