Friday 16 August 2013

The Luddite Cringe


Any kind of change (especially a really big one that affects heaps of people) generally provokes protests and cautionary tales from various parties who object to what’s being done, often on the grounds that it won’t cater for some important need, or that its benefits are overstated at best and non-existent at worst. Perhaps more importantly, opponents of change will often identify a number of negative implications that they believe others haven’t properly identified or understood. When large-scale societal changes are mixed with education, we get even more worried, because the whole thing might mess up the future. In all of this often emotive mix, it can be easy to overlook how some of the mental models, schemas or underlying assumptions that were useful and relevant at one point in history can lead us to interpret things in a problematic way.

This is a particularly pertinent issue with technology and learning, because technology has changed relatively quickly in recent history while education has (in many ways) changed somewhat more slowly. As educators responsible for our own learning and the learning of others, it is important that we’re able to think not only about technology and learning together, but also about the ways in which our own mental models can affect how we think about technology in the learning process. In an attempt to make all this a bit more tangible, I’ll outline a few examples of what I’ve started to label luddite cringe. Luddite cringe is kind of like cultural cringe, except that instead of reacting negatively to, say, New Zealand accents in film, we’re reacting in particular ways to technology in education.

There’s a really interesting article here which summarises some recent, potentially surprising findings from a piece of research on students’ writing skills and ICTs. “He [a student in the survey] also said the auto-correct spelling feature on his iPhone has had a negative effect. ‘My spelling sucks,’ he texted.” This quote is a nice example of the impending doom that’s been proclaimed time and time again by concerned citizens since the invention of the short messaging service for mobile phones. Obviously we would be hard pressed to deny that text message abbreviations were going to affect young people’s spelling abilities to some degree, though I guess the worry could be optimistically reframed as a concern about accuracy in the communication of meaning. All that said, the following quote (which sums up some of the findings of the research) is also really interesting: “But,” he added, “I guess that they [ICTs/digital technologies] all help involve writing in my life because idk [look that one up yourself if you need to] how much I'd write if I didn't have texting and stuff.” As the introduction to the research points out, “According to teachers, students’ exposure to a broader audience for their work and more feedback from peers encourages greater student investment in what they write and in the writing process as a whole.” I’ll leave it to you to decide whether the tradeoff was worth it.

So what exactly is this luddite cringe, and what mental models or inadequate lenses are causing it to happen? In the various kinds of luddite cringe I see (often from myself! A few years back I ranted to whoever would listen that tablets were evil because they didn’t have keyboards), we make two mistakes:
1) We don’t give enough credit to the agency and intentions of the learner, and
2) We fail to recognise that things we think are necessary (or part of conventions, traditions or existing societal structures) aren’t going to be as big a deal in the (often, but not always, near) future. Basically, our mental models aren’t cutting it.

With the mobile phone texting and digital technology luddite cringe, we’ve assumed that students won’t be able to identify any decline in technical writing accuracy themselves, and that they won’t recognise the times when writing and spelling correctly is particularly important versus the times when it's not so essential. It's interesting to note that the student in the survey was able to recognise that correct spelling was at least an issue to consider, which, given the ongoing grief we give students about it, is hardly surprising. (Check out the research and the article if you want a more detailed summary of the stats and findings.) For all the worry about declining standards in the technical aspects of writing, I think I more commonly receive emails at work that aren't capitalised, or are far from correctly punctuated, than I do ‘proofread’ ones. And I'm talking about emails from all kinds of adults here, not students. Most of the students I talk to are pretty jolly clear about when writing needs to be technically accurate without me having to harp on about it. And that authentic moment when they do need to learn the skills to get it right will hopefully be one where I or another teacher can help them with it.

The rant potential doesn’t stop there, thanks to luddite cringe contributing factor 2. I might be starting to stretch a bit here, but when considering recent ideas on recasting knowledge as a verb, placing value on creating knowledge in collaborative settings and finding uses for it rather than passively receiving it as individuals, I also wonder whether we are starting to place greater value on dynamic, ongoing and responsive communication. It may be that this kind of communication is gaining recognition at the expense of the traditional emphasis on single-shot, one-way delivery, where an expert passes on information in a one-to-many medium. Perhaps if what we value is the former of these two types of communication, a flexible, responsive and ongoing process rather than, say, the act of reading a piece of writing with a clear beginning and end, then the accurate, precise meanings that might be better communicated with perfect grammar and spelling aren’t quite so important. If a context where we are working towards greater understanding of each other’s ideas (experts and newbies alike) and better ways of doing things consists of two-way, ongoing communication and a continuing development of knowledge, then any inaccuracies in communication can be overcome as the process continues and iterates. It makes a kind of sense, then, that in old-school learning, where we’re more ‘deeply’ filling our brains with one expert’s ideas in the course of a discrete event, precise expression and technical accuracy are potentially much more crucial. I’m not suggesting one is necessarily better than the other here; we’re all far better off if students can learn the skills both to effectively read a textbook on dam engineering (where the ideas of the expert are particularly essential if you’re living down river) and to engage in developing knowledge responsively and collaboratively, perhaps, but not necessarily, without the help of ICTs. Believing that there’s only a limited number of ways to develop knowledge well, and that these need to be based on older technologies or more accepted ways of doing things, is just plain crazy imo. Surely choosing the best method of developing knowledge, fit for purpose, is better than being limited to one way of doing things simply because the technology provided us with one method (i.e. the printing press) a while back?

Nicholas Carr, in his recent book “The Shallows”, describes concern over “how the printed book served to focus our attention, promoting deep and creative thought. In stark contrast, the Internet encourages the rapid, distracted sampling of small bits of information from many sources.” Having read books avidly as a youngster and gained access to the internet a little later, I’ve found my experience to be quite different. If I’m perusing Wikipedia (but surely that’s not where you go for serious reading?), or maybe something a bit heavier and more academic, and come across a term I want to know more about, I can check it easily. I don’t need to drive to the library to get out a different book to read up on important contextual information before I can properly consider the overall picture of whatever I’m trying to develop an understanding of. After considering a piece of contextual information in the light of whatever I started with, I’ll often choose to go back to my original topic. In short, the potential offered by the internet is pretty darned awesome, and in the way that I think about knowledge, I can actually deepen my understanding of what I’m learning about rather than the opposite, just as I’m able to, through external pressure or internal motivation, stay on one topic. We can’t short-change our learners and their ability to develop the skills and dispositions to stay on track with whatever purpose is driving them. Just like us, students can learn not to be distracted by all that electronic noise the doomsayers proclaim is such a large danger.

And what exactly does deep knowledge look like anyway? Is it the findings, opinions or synthesis of one person? The author of a particular book or blog? Why must we follow the thought-train that worked well for one individual just because that’s the detail and order in which they’ve decided to set it down in a book? Is that the only form that deep learning takes? How are the ways we’ve considered knowledge in the past over-feeding our paranoia about what the internet and digital technologies can do to the minds of the nation’s youth? The thing that bothers me the most about the type of thinking that views the internet and digital technologies as promoting “shallowness” is how much it detracts from the agency of the individual learner. If any learner is able to identify purpose, use, relevance and what success looks like in relation to all this, can’t they also be trusted (with the right kind of assistance when needed) to use any source available to develop the knowledge that’s needed? And who’s going to explain to them what kind of knowledge is going to serve them the most in the future anyway?

That massive rant aside, I’m not trying to understate the things we should watch out for when helping our learners to follow whichever information journey they need to set out on for a certain purpose. Just as we are continuing to change our view of exactly what teaching and learning is, and our role in it, we need to help our students learn how to get the most out of the technology they have access to. While that’s hardly a new idea, perhaps the thing that’s harder to keep in mind is that we need to do this while simultaneously and constantly revisiting our own mental models of technology and learning. And next time we identify a deep, unarticulated worry that we think might be related to technology and how it could affect our students’ learning, we also need to consider our own view of the world, technology and knowledge, and how this might (or might not!) need to change. In an attempt to be more concise: we should always try to identify the difference between genuine issues and plain old luddite cringe.
