Talking Teaching

April 25, 2015

how do we assess teaching quality?

Way back when I was a secondary teacher, & there were signs that the government of the day was looking at a possible move to performance pay, there were fairly frequent staffroom discussions around how to assess the quality of one’s teaching. (There’s a much more recent report on this subject here.) One metric proposed was how many of your students passed School Cert. (I told you it was a long time ago!) That was all very well for those whose classes – we had streamed classes at my school – contained students who could mostly be expected to achieve rather well. I had one of those, but I also had the ‘problem’ 4th-form (year 10) class: kids who for a variety of reasons weren’t viewed by many as likely to pass.

I had no problems with that class. I had to teach them science, and so we ‘did’ science in contexts that they found engaging & relevant: the science of cooking, the science of cosmetics, & so on. We had a ball, & in the process they seemed to absorb some knowledge of science: what it was, & how it worked. But mostly they still didn’t attempt School C (the equivalent of today’s NCEA Level 1), & so by that rubric I’d have been judged a poor teacher. Perhaps, if we’d looked systematically at the level of prior knowledge those students entered my class with, and assessed the gains they made on that, both they and I would have been judged differently.

I was reminded of this during a discussion today about assessing the quality of teachers in a university setting. Now sure, we have a system of paper appraisals and teaching appraisals. But they aren’t shared with line managers as a matter of course, and so that can make things difficult during goal-setting and promotion rounds. For in the absence of that information, just how do line managers (& others) come to any evidence-based assessment of a teacher’s abilities and performance in the classroom? I suspect the short answer is that they can’t, not really.

But even where the appraisal data are available, they shouldn’t be the only tool individuals (& managers) use to assess performance. I’m often told the appraisals are easy to ‘game’, although I’m not sure how correct that is; it does tend to assume that students aren’t able to assess papers and teacher performance reasonably well. I mean, statements like “this teacher made it clear what was expected of me”, “this teacher made the subject interesting”, and “this teacher was approachable when advice or help was required” are fairly objective, after all. But ideally they’d be just one element in an educator’s portfolio.

That portfolio could also include notes and commentary from an option that teachers in the compulsory sector will be used to: having a colleague sit in on a class and provide constructive feedback afterwards. In my experience this is rare in universities, which is a real pity, because both parties can learn a good deal from the experience. (We are accustomed, and encouraged, to have others cast a critical eye on our research outcomes, so why not our teaching?)

It could also include notes & reflections from the education literature. I firmly believe that while my teaching has to be informed by current research in my discipline (& I simply can’t imagine teaching the same thing, year after year!), it must also be informed by findings from research into pedagogy.  Things change, after all. Teaching & learning methods that might have seemed to work for those who taught me at uni are almost certainly out of date in today’s classrooms. As regular readers will know, I put much of my own reflection into writing these blog posts: the blog makes up a largish part of my own portfolio.

And of course, if you’re dipping into the literature, and attending seminars or workshops from your equivalent of our Teaching Development Unit, then you’ll pick up all sorts of other, informal, tips for gaining feedback on how things are going in the classroom. It’s worth linking back to a guest post from my friend & colleague Brydget, as she summarises all this very well.

The trick, of course, is to work out how to present that information to one’s line manager :)

April 15, 2015

sustainability & on-line learning…

… and serendipity!

I’ve just participated in a great AdobeConnect session, run by the university’s Centre for e-Learning, on the interfaces between academic publications and social media. It was fun, educational, & thought-provoking, & has provided something of a spur to my own thinking about the value** of social media in this particular sphere. (For example, while academics are pressured to publish, & the number & position (journal) of those publications are seen as a measure of their worth, you could well ask what the actual value of the work is if few or no people actually read it. I’ve got another post lined up about this.)

Anyway, one of the things that I brought into the conversation was the value of Twitter (& Facebook) in terms of finding new information in fields that interest me. (I know that a lot of my recent blog posts have developed from ideas sparked by FB sources.) I’m a fairly recent convert to Twitter but have enjoyed several tweeted conversations about science communication & science education, and I do keep an eye on posts from those I’m ‘following’ in case something new crops up.

And so it was that when I started following our AdobeConnect host, this popped up:

Stephen’s link takes you to this article: net positive valuation of online education. The executive summary makes very interesting reading at a time when ‘we’ (ie my Faculty) are examining ways to offer our programs to a changing student demographic. It finds that on-line learning as a means of delivering undergraduate programs opens up access for people who don’t fit the ‘typical traditional undergraduate’ profile, such that those people may end up with greater life-time earnings & tax contributions, and reduced use of social services. And using on-line learning pedagogies & technologies seems to result in a reduced environmental footprint for the degrees: the authors estimate that on-line delivery of papers saves somewhere between 30 & 70 tonnes of CO2 per degree, because of reductions both in travel to & from campus, and in the need for bricks & mortar.

There’s an excellent infographic here, and I can see why the institution they surveyed (Arizona State University, ASU) would say that

[i]n the near term, nearly 100 percent of an institution’s courses, both immersive and virtual, will be delivered on the same technology platforms.

However, there are caveats. ASU has obviously got a fairly long history of using e-learning platforms. This is not simply a matter of taking an existing paper (or degree program), making its resources available on-line, & saying ‘there! we’re doing e-learning’ – because unless the whole thing is properly thought through, the students’ learning experiences may not be what their educators would like to think.

In other words, this sort of initiative involves learning for both students and educators – and the educators’ learning needs to come first.

 

** As an aside, here’s an example of what could be called ‘crowd-sourcing’ for an educational resource, via Twitter. But the same could easily be done for research.

October 27, 2014

widening the definition of what constitutes scientific communication and publishing


This blog post at SkepticalScalpel really struck a chord. Entitled “Should social media accomplishments be recognised by academia”, it compares the number of citations the author’s received for published papers with the number of hits on a blog post reviewing original research. And finds there’s no contest:

Three years ago, I wrote “Statistical vs. Clinical Significance: They Are Not the Same,” which reviewed a paper on sleep apnea …

That post has received over 13,400 page views, certainly far exceeding the number of people who have read my 97 peer-reviewed papers, case reports, review articles, book chapters, editorials, and letters to journal editors.

The SkepticalScalpel author also notes that this sort of on-line peer-review and discussion of data can have rapid, effective results:

Last year, some Australians, blogging at the Intensive Care Network, found that the number needed to treat stated in a New England Journal paper on targeted vs. universal decolonization to prevent ICU infection was wrong. They blogged about it and contacted the lead author who acknowledged the error within 11 days. It took the journal 5 months to make the correction online.

PZ Myers has also advocated for such on-line, social-media-mediated, peer review, pointing to microbiologist Rosie Redfield as a great example of how this works. (The discussion at that last link shows ‘open’ peer-review in operation – and posts like that will have attracted a far wider audience than the original paper.)

But wait, there’s more! At Scientific American, Simon Owens writes about Kathleen Mandt: a scientist who’s become part of probably the biggest two-way stream of communication between scientists and the general public in the world, via Reddit.

R/science is a default subreddit, meaning it’s visible to people visiting Reddit.com even if they aren’t logged in. According to internal metrics, r/science draws between 30,000 and 100,000 unique visitors a day. It’s arguably the largest community-run science forum on the Internet. And starting in January r/science officially launched its own Science AMA series, and very quickly scientists who are producing interesting, groundbreaking research but not widely known to others outside their fields began answering questions on the front page of a site that is visited by 114 million people a month (this includes registered and casual visitors).

Most scientific research is published in expensive journals, some of which are not available in smaller libraries. And the vast majority of findings never receive media coverage. “Really, the only way people get to find out about new research is if they have journal access or if they read the short-form news story that can be skewed by whatever journalist is covering it,” says Chris Dawson, another r/science mod. “If you had questions about the study then there wasn’t a good way to get them answered, and now you can.” Virtually overnight, Reddit had created the world’s largest two-way dialogue between scientists and the general public.

Of course there are limitations to this mode of communication. Questions may be off the point, ie not directly related to the research under discussion. This is hardly surprising, but it’s something that a good science journalist can avoid. (However, good science journalism, in the mainstream media, is a fairly rare beast.) And the r/science moderators do have to make some careful decisions around which researchers to invite into the forum.

But overall, in terms of getting information out there with the potential for meaningful, rapid interaction with one’s audience, and a much bigger audience at that, science blogs and venues like Reddit’s r/science probably win hands-down over more conventional modes of publication. As Owens says,

This year’s Science AMAs overall reveal that r/science fulfills a public need that’s unforeseen, unknown, unaddressed or not fully embraced by the scientific community. In a world where the general public often finds it frustratingly difficult to access scholarly journals, demand remains for a way to connect scientists and their work with nonscientists. With the rise of MOOCs and other digital tools such as Reddit, science communication has expanded well beyond its traditional confines in the ivory tower.

So is it time, as Skeptical Scalpel says, for measures of scholarly output to be broadened when it comes time for promotion?

 

February 11, 2014

musings on moocs

I’ve had a few conversations lately around the topic of Massive Open On-line Courses (or MOOCs). These fully on-line courses, which typically have very high enrolments, have become widely available from overseas providers (my own institution recently developed and ran the first such course in New Zealand, which I see is available again this year). If I had time I’d probably do the occasional one for interest (this one on epigenetics caught my eye).

Sometimes the conversations include the question of whether, and how much, MOOCs might contribute to what’s generally known as the ‘universities of the future’. This has always puzzled me a bit, as in their current incarnation most MOOCs don’t carry credit (there are exceptions), so don’t contribute to an actual degree program; they would seem to work better as ‘tasters’ – a means for people to see what a university might have to offer. Depending on their quality, they could also work to encourage young people into becoming more independent learners, regardless of whether they went on to a university – there’s an interesting essay on this issue here. So I thought it would be interesting to look a bit more closely.

Despite the fact that these courses haven’t been around all that long, there’s already quite a bit published about them, including a systematic review of the literature covering the period 2008-2012 by Liyanagunawardena, Adams, & Williams (2013), and a rather entertaining and somewhat sceptical 2013 presentation by Sir John Daniel, (based largely on this 2012 paper).

The term MOOC has only been in use since 2008, when it was first coined for a course offered by the University of Manitoba, Canada (Liyanagunawardena et al, 2013). Daniel comments that the philosophy behind early courses like this was one of ‘connectiveness’, such that resources were freely available to anyone, with learning shared by all those in the course. This was underpinned by the use of RSS feeds, Moodle discussions, blogs, Second Life, & on-line meetings. He characterises ‘modern’ MOOCs as bearing little relation, in their educational philosophy, to these early programs, viewing programs offered by major US universities as

basically learning resources with some computerised feedback. In terms of pedagogy their quality varies widely, from very poor to OK.

Part of the problem here lies with the extremely large enrolments in today’s MOOCs, whereas those early courses were small enough that some semi-individualised interactions between students and educators were possible. Unfortunately the combination of variable pedagogy plus little in the way of real interpersonal interactions in these huge classes also sees them with very high drop-out rates: Liyanagunawardena and her colleagues note that the average completion rate is less than 10% of those beginning a course, with the highest being 19.2% for a Coursera offering.

Daniel offers some good advice to those considering setting up MOOCs of their own, given that currently – in his estimation – there are as yet no good business models available for these courses. Firstly: don’t rush into it just because others are. Secondly,

have a university-wide discussion on why you might offer a MOOC or MOOCs and use it to develop a MOOC strategy. The discussion should involve all staff members who might be involved in or affected by the offering of a MOOC.

His third point: ensure that any MOOC initiatives are fully integrated into your University’s strategy for online learning (my emphasis). To me this is an absolute imperative – sort the on-line learning strategy first, & then consider how MOOCs might contribute to this. (Having said that, I notice that the 2014 NMC Horizon report on higher education, by Johnson et al., sees these massive open on-line courses as in competition with the universities, rather than complementary to their on-campus and on-line for-credit offerings. And many thanks to Michael Edmonds for the heads-up on this paper.)

This is in fact true for anything to do with moving into the ‘universities of the future’ space (with or without MOOCs). Any strategy for online learning must surely consider resourcing: provision not only of the hardware, software, and facilities needed to properly deliver a ‘blended’ curriculum that may combine both face-to-face and on-line delivery, but also of the professional development needed to ensure that educators have the pedagogical knowledge and skills to deliver excellent learning experiences and outcomes in what for most of us is a novel environment. For there’s far more to offering a good on-line program than simply putting the usual materials up on a web page. A good blended learning (hybrid) system must be flexible, for example; it must suit

the interests and desires of students, who are able to choose how they attend lecture – from the comfort of their home, or face-to-face with their teachers. Additionally, … students [feel] the instructional technology [makes] the subject more interesting, and increase[s] their understanding, as well as encourag[ing] their participation… (Johnson et al., 2014).

This is something that is more likely to encourage the sort of critical thinking and deep learning approaches that we would all like to see in our students.

Furthermore, as part of that hybridisation, social media are increasingly likely to be used in learning experiences as well as for the more established patterns of social communication and entertainment (eg Twitter as a micro-blogging tool: Liyanagunawardena et al., 2013). In fact, ‘external’ communications (ie outside of learning management systems such as Moodle) are likely to become more significant as a means of supporting learner groups in this new environment – this is something I’m already seeing with the use of Facebook for class discussions and sharing of ideas and resources. Of course, this also places demands on educators:

Understanding how social media can be leveraged for social learning is a key skill for teachers, and teacher training programs are increasingly being expected to include this skill. (Johnson et al., 2014).

There is also a need, in any blended learning system, to ensure skilled moderation of forums and other forms of on-line engagement, along with policies to ensure privacy is maintained and bullying and other forms of unacceptable behaviour are avoided or nipped in the bud (Liyanagunawardena et al., 2013; Johnson et al., 2014). And of course there’s the issue of flipped classrooms, something that the use of these technologies really encourages but which very few teaching staff have any experience of.

Another issue examined by Liyanagunawardena and her colleagues, in their review of the MOOC literature, is that of digital ‘natives’: are our students really able to use new learning technologies in the ways that we fondly imagine they can? This is a question that applies just as well to the hybrid learning model of ‘universities of the future’. In one recent study cited by the team, researchers found that of all the active participants in a particular MOOC, only one had never been involved in other such courses. This raises the question of “whether a learner has to learn how to learn” in the digital, on-line environment. (Certainly, I’ve found I need to show students how to download podcasts of lectures, something I’d naively believed that they would know how to do better than I!) In other words, any planning for blended delivery must allow for helping learners, as well as teachers, to become fluent in the new technologies on offer.

We live in interesting times.

And I would love to hear from any readers who have experience in this sort of learning environment.

T.R. Liyanagunawardena, A.A. Adams & S.A. Williams (2013) MOOCs: a systematic study of the published literature 2008-2012. The International Review of Research in Open and Distance Learning 14(3): 202-227

L. Johnson, S. Adams Becker, V. Estrada, & A. Freeman (2014) NMC Horizon Report: 2014 Higher Education Edition. Austin, Texas: The New Media Consortium. ISBN 978-0-9897335-5-7
