Talking Teaching

June 1, 2014

“If you’re going to get lectured at, you might as well be at home in bunny slippers”

This is a post I first wrote for the Bioblog.

There’s an increasing body of literature demonstrating the benefits of active learning for tertiary students taking science subjects. This is a topic I’ve written about before, but I’m always interested in reading more on the subject. And let’s face it, the more evidence the better, when you’re wanting to get lecturers in the sciences engaged in discussion around different ways of teaching. As you’ll have gathered, I find a lot of new science & education material via alerts on Facebook, as well as through the more conventional journal feeds & email alerts, and so it was with this recent paper by Scott Freeman & colleagues, which looks at the effect of active learning on student performance in science, technology, engineering and maths (STEM) classes: I saw it first described in this post1 (whence also comes the quote I’ve used as my title).

The paper by Freeman et al (2014) is a meta-analysis of more than 200 studies of teaching methods used in STEM classes, which included “occasional group problem-solving, worksheets or tutorials completed during class, use of personal response systems with or without peer instruction, and studio or workshop course designs” (ibid.). The impact of the various methods on student learning was measured in two ways: by comparing scores on the same or similar examinations or concept inventories; and by looking at the percentage of students who failed a course.

What did their results show? Firstly, that students’ mean scores in exams assessing work covered in active-learning classes improved by around 6% over more traditional teaching-&-learning formats (a finding that matches those of earlier studies); and secondly, that students in those traditional classes “were 1.5 times more likely to fail”, compared to students given in-class opportunities for active learning (with a ‘raw failure’ rate averaging 33.8% in traditional lecturing classes and 21.8% in more active classes). These results held across all STEM subjects. The researchers also found that active-learning techniques had a stronger effect on concept inventories than on formal exams (& here I’m wondering whether that doesn’t reflect – at least in part – the nature of the exams themselves, eg did they give opportunities to demonstrate deep learning?). Interestingly, while the positive impact of active learning was seen across all class sizes, it was more pronounced in classes of fewer than 50 students.
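(A quick back-of-the-envelope check: dividing the two raw failure rates quoted above gives essentially that figure, though the paper’s own meta-analysis is of course more careful than this simple division:

33.8% ÷ 21.8% ≈ 1.55, i.e. roughly 1.5 times the failure rate.)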

On the class size thing, I’m wondering if that might be because it’s easier to get everyone actively involved in a smaller class? For example, I’ve got a colleague at another institution who runs a lot of his classes as ‘flipped’ sessions, and ensures that all students get the opportunity to present to the rest of the group – this is far easier to set up in a class of 50 than in a group with 200+ students in it. (I know! When I run ‘design-a-plant/animal’ sessions, there’s time for only a sub-set of student ‘teams’ to present their creatures to the rest of the class. Plus you really have to work at making sure you get around all teams to talk with them, answer questions, & so on, and so it’s perhaps more likely that someone can remain uninvolved.)

The research team concluded:

Finally, the data suggest that STEM instructors may begin to question the continued use of traditional lecturing in everyday practice, especially in light of recent work indicating that active learning confers disproportionate benefits for STEM students from disadvantaged backgrounds and for female students in male-dominated fields. Although traditional lecturing has dominated undergraduate instruction for most of a millennium and continues to have strong advocates, current evidence suggests that a constructivist “ask, don’t tell” approach may lead to strong increases in student performance, amplifying recent calls from policy-makers and researchers to support faculty who are transforming their STEM courses.

The ‘bunny slippers’ quote from the lead author comes from the post that originally caught my eye. And I suspect there may well be bunny slippers (or the equivalent) in evidence when my own students watch lecture recordings at home :) But this does raise a question around massive open on-line courses (MOOCs), which tend to have a very high ‘fail’ rate – how much of this might be attributed to the difficulty in ensuring opportunities for active learning in these ‘distance’ classes?

And of course, we aren’t really talking a simple dichotomy between ‘traditional’ lecture classes and classes with a very high component of active-learning opportunities – something the research team also note: some of the ‘non-traditional’ methods they surveyed had only a 10-15% ‘active’ component. This is something discussed at more length by Alex Small in a post entitled “In Defence of the Lecture”. I have to say that his approach sounds very similar to mine, with its mix of Socratic questioning, pop quizzes, group discussions, and – yes – sections of ‘lecture’. As Small says:

Not every lecture is a person spending an hour talking nonstop to deliver facts. A good lecture is engaging, it naturally invites discussion and dialogue, it operates at a level much higher than raw information delivery, it is a natural setting for the expert to act as a role model, and it can be integrated with more formal activities (e.g., clicker questions, small-group discussions, etc.).

Lecture should not be the sole means of instruction, and bad lectures are a plague demanding eradication, but we err when we too strenuously inveigh against the lecture.

I couldn’t agree more. And maybe that’s a message that’s being lost in the louder discussion around active learning, and which needs to be heard more widely.

1 The comments thread for this story is also worth reading.

S.Freeman, S.L.Eddy, M.McDonough, M.K.Smith, N.Okoroafor, H.Jordt & M.P.Wenderoth (2014) Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences. http://www.pnas.org/content/early/2014/05/08/1319030111

May 23, 2014

some more thoughts on facebook & student engagement

Filed under: education, university — alison @ 9:58 pm

After I wrote my last post, on using course-related Facebook pages to help enhance student engagement, I thought I’d see what students thought about the issue. So I shared a link to the post on the FB page run by our biology students, and asked what members had to say. I also mentioned the idea in class, and discussed it with a colleague (we were originally talking about student management systems, but it was one of those wide-ranging chats that grows and grows…).

Somewhat to my surprise – although I guess I hadn’t really given it a lot of thought – there are a lot of FB pages out there with links to various papers and programs. (Our registrar set one up with several friends, when they were working on a group project for a postgraduate paper, as a means of sharing ideas and working on problems.) The students tell me that they found the pages really did help with a sense of ‘belonging’, especially for those who were at a satellite campus or didn’t come onto the actual campus regularly. They gave opportunities to share information, answer questions,  & just be social.

Interestingly, several said that they found our ‘standard’ student learning management system, Moodle, difficult to use from a smart-phone: apparently you need to log on again and again; there’s no means of staying logged in for a day, for example. They also reminded me that with FB, you get notifications whenever someone posts something on a thread you’re following; on Moodle the notifications are less constant and via email.

And apparently some students find Moodle quite intimidating (& I must follow up on why), and people were more likely to comment & to answer each other on student-run class FB pages.

One thing that’s become more & more obvious to me, the more I think about it, is the immediacy and the highly visual nature of FB, as opposed to the text-based look of a Moodle page (and yes, I know you can add pictures!). Here’s a screenshot of part of the page for my A semester class:

[Screenshot 2014-05-23 08.50.35: part of the Moodle page for my A semester class]

As you can see, it’s all words. If someone wants to see what’s being discussed, they have to open the ‘Discussion forum’ folder, & once they’re in there, they need to open a particular ‘topic’. They need to click on the link for a video or webpage – there’s no enticing preview. And so on. Whereas on FB, the video or the page is right there with a nice visual tag. OK, posts and comments build up & will slip to the bottom of the page if they’re not active, but that happens within a Moodle forum as well.

Anyway, what I’m thinking I’ll do is set up a closed page for the B semester paper (students have to ask to join) and send the link for the page to everyone in the class. I’ll make it clear that this is as well as and not instead of Moodle, which remains the official means of sharing information & resources. Also, I’ll set it up so the class reps – if they agree! – are admins (& they and I can agree on some basic house rules), so that there’s a feeling that this is more ‘by the students, for the students’. And then we’ll see what happens. (I’m sure I’ll think of more things as we go along!)

What do you think?

May 12, 2014

facebook – more than just social networking

Some of my readers over on Sciblogs will probably have realised that I quite like Facebook – not least because it’s a good source of gorgeous images and quirky facts that can start me thinking about a new science blog post. (You don’t see that side of me here on Talking Teaching :D ) Also, it’s fun keeping in contact with friends & participating in various discussion groups.

One of those groups was set up by the biological sciences students at my institution, and it’s used mainly for sharing biology articles and images, the occasional in-joke :) , and alerting other students to upcoming events that their committee has organised. This particular page sees a bit more student activity than some of our paper-specific Moodle pages, so for a while now I’ve wondered about the potential of a good Facebook page to be more than ‘just’ a place to hang out and share pictures & stories.

Anyway, recently I had a conversation (on FB, lol) with a couple of fellow Ako Aotearoa Academy members about this potential. It turns out that they both use FB quite extensively in their teaching lives and gave me a lot of helpful hints – along with a very recent paper on this very subject (Dougherty & Andercheck, 2014).

Kevin Dougherty and Brita Andercheck teach a large (around 200 students) introductory sociology class at Baylor University in the US. Like all those with classes of this size (or larger), they recognised that one of the major issues they face is

the tendency for students to feel like anonymous spectators rather than active, collaborative participants

- that is, there’s a real risk that many students will not properly engage with classroom activities, & that their learning will suffer as a result. I’ve written previously about flipped teaching as an example of a technique to increase student engagement (& performance), but with a range of different learning styles among class members, what works for one student won’t necessarily work for another.

So, how do Dougherty & Andercheck use social media to enhance their students’ engagement with the subject, and their achievement (as measured against the learning objectives for the paper)?

The larger a class gets, the harder it can be – even with the best will in the world – to get everyone actively involved in discussions, debates and group work during class time. Teachers might try & manage this using a Student Learning Management System (SLMS) like Moodle but again, many students don’t really engage here either. (Certainly that’s been my own experience.)

The authors wondered, what about Facebook? After all,

[s]ocial media, such as Facebook, Twitter, and Instagram, are part of life for the generation of students now filling college classes

and it’s easy to load material and set up discussion threads. (Even a relative technological illiterate like me can do it!) Why not use it as a more engaging SLMS, one that’s more likely to get buy-in from students because it’s already familiar to them?

I can just hear the cries of horror that might greet such a proposition. Don’t students already spend far too much time on FB and other networking sites? It would just be a distraction. These are valid objections. But with evidence in favour from a developing body of research into such uses of social media, Dougherty & Andercheck set up a study of the impact of a group FB page on students’ engagement & performance in their own class.

For anyone interested in doing likewise, their paper in Teaching Sociology has a very useful description of how the class page is set up & administered. (One of my Academy colleagues has similar pages for MOOCs that he is involved in; due to their size, he has some students help with the admin.) It was run in parallel with their ‘normal’ SLMS, Blackboard, and the latter was where students obtained class handouts & readings. FB was for sharing & discussion; for videos, news stories, & photos; for the ‘Question of the Day’.

For students unable to participate or uncomfortable participating in the classroom discussion, we invited them to add their thoughts and reflections to the conversation on Facebook. We used poll-style questions on the Facebook Group as another means to engage students.

Students readily got involved, ‘liking’ posts, joining discussions, and posting material. Two weeks into the semester, more than half the class had joined the page, and 2/3 were part of it by the end of the paper. To see how all this activity affected learning outcomes, the researchers carried out content analysis of student postings & matched this to performance, and also asked students for feedback via the usual paper appraisals.

The appraisal data showed that half the class visited the FB page on at least a weekly basis, and that the majority were positive about its effect on their experience in the class. While  24% disagreed (ranging from slight to strong disagreement) that it enhanced their experience, Dougherty & Andercheck noted wryly that “it was students who never or rarely used the Facebook Group who disagreed”. Students also felt that the page gave them a stronger sense of belonging in the course, and also that it positively influenced their achievement of the learning objectives.

Of course, the final proof of the pudding is in the eating (sorry, channeling cooking blog here!): was this reflected in actual performance? The researchers found that FB group membership showed a positive correlation to total quiz points and total points. It also had “a marginally significant, positive relationship” with both a student’s total score for the paper and their score in the final exam, and the number of posts someone made was linked to their quiz score.

What’s more, their analysis of the page’s content and their students’ use of the page clearly shows how involved many class members became in discussion. This is a big point for me: I use Moodle in my own class & it’s sometimes a bit sad to see how little real conversation there is about a topic. We might see a question posted, followed by a couple of answers, & then it all dies down again. Would discussions become deeper & more complex in a different, more familiar (&, let’s face it, less clunky) medium? I guess there really is only one way to find out. (And I’ll be making good use of the very helpful hints provided at the end of this thoughtful, and thought-inspiring, paper!)

K.D.Dougherty & B.Andercheck (2014) Using Facebook to Engage Learners in a Large Introductory Course. Teaching Sociology 42(2): 95-104 DOI: 10.1177/0092055X14521022

February 11, 2014

musings on moocs

I’ve had a few conversations lately around the topic of Massive Open On-line Courses (or MOOCs). These fully on-line courses, which typically have very high enrolments, have become widely available from overseas providers (my own institution recently developed and ran the first such course in New Zealand, which I see is available again this year). If I had time I’d probably do the occasional one for interest (this one on epigenetics caught my eye).

Sometimes the conversations include the question of whether, and how much, MOOCs might contribute to what’s generally known as the ‘universities of the future’. This has always puzzled me a bit, as in their current incarnation most MOOCs don’t carry credit (there are exceptions), so don’t contribute to an actual degree program; they would seem to work better as ‘tasters’ – a means for people to see what a university might have to offer. Depending on their quality, they could also work to encourage young people into becoming more independent learners, regardless of whether they went on to a university – there’s an interesting essay on this issue here. So I thought it would be interesting to look a bit more closely.

Despite the fact that these courses haven’t been around all that long, there’s already quite a bit published about them, including a systematic review of the literature covering the period 2008-2012 by Liyanagunawardena, Adams, & Williams (2013), and a rather entertaining and somewhat sceptical 2013 presentation by Sir John Daniel (based largely on this 2012 paper).

The term MOOC has only been in use since 2008, when it was first coined for a course offered by the University of Manitoba, Canada (Liyanagunawardena et al, 2013). Daniel comments that the philosophy behind early courses like this was one of ‘connectiveness’, such that resources were freely available to anyone, with learning shared by all those in the course. This was underpinned by the use of RSS feeds, Moodle discussions, blogs, Second Life, & on-line meetings. He characterises ‘modern’ MOOCs as bearing little relation, in their educational philosophy, to these early programs, viewing programs offered by major US universities as

basically learning resources with some computerised feedback. In terms of pedagogy their quality varies widely, from very poor to OK.

Part of the problem here lies with the extremely large enrolments in today’s MOOCs, whereas those early courses were small enough that some semi-individualised interactions between students and educators were possible. Unfortunately the combination of variable pedagogy plus little in the way of real interpersonal interactions in these huge classes also sees them with very high drop-out rates: Liyanagunawardena and her colleagues note that the average completion rate is less than 10% of those beginning a course, with the highest being 19.2% for a Coursera offering.

Daniel offers some good advice to those considering setting up MOOCs of their own, given that currently – in his estimation – there are as yet no good business models available for these courses. Firstly: don’t rush into it just because others are. Secondly,

have a university-wide discussion on why you might offer a MOOC or MOOCs and use it to develop a MOOC strategy. The discussion should involve all staff members who might be involved in or affected by the offering of a MOOC.

His third point: ensure that any MOOC initiatives are fully integrated into your University’s strategy for online learning (my emphasis). To me this is an absolute imperative – sort the on-line learning strategy first, & then consider how MOOCs might contribute to this. (Having said that, I notice that the 2014 NMC Horizon report on higher education, by Johnson et al.,  sees these massive open on-line courses as in competition with the universities, rather than complementary to their on-campus and on-line for-credit offerings. And many thanks to Michael Edmonds for the heads-up on this paper.)

This is in fact true for anything to do with moving into the ‘universities of the future’ space (with or without MOOCs). Any strategy for online learning must surely consider resourcing: provision not only of the hardware, software, and facilities needed to properly deliver a ‘blended’ curriculum that may combine both face-to-face and on-line delivery, but also of the professional development needed to ensure that educators have the pedagogical knowledge and skills to deliver excellent learning experiences and outcomes in what for most of us is a novel environment. For there’s far more to offering a good on-line program than simply putting the usual materials up on a web page. A good blended learning (hybrid) system must be flexible, for example; it must suit

the interests and desires of students, who are able to choose how they attend lecture – from the comfort of their home, or face-to-face with their teachers. Additionally, … students [feel] the instructional technology [makes] the subject more interesting, and increase[s] their understanding, as well as encourag[ing] their participation… (Johnson et al., 2014).

This is something that is more likely to encourage the sort of critical thinking and deep learning approaches that we would all like to see in our students.

Furthermore, as part of that hybridisation, social media are increasingly likely to be used in learning experiences as well as for the more established patterns of social communication and entertainment (eg Twitter as a micro-blogging tool: Liyanagunawardena et al., 2013). In fact, ‘external’ communications (ie outside of learning management systems such as Moodle) are likely to become more significant as a means of supporting learner groups in this new environment – this is something I’m already seeing with the use of Facebook for class discussions and sharing of ideas and resources. Of course, this also places demands on educators:

Understanding how social media can be leveraged for social learning is a key skill for teachers, and teacher training programs are increasingly being expected to include this skill. (Johnson et al., 2014).

There is also a need, in any blended learning system, to ensure skilled moderation of forums and other forms of on-line engagement, along with policies to ensure privacy is maintained and bullying and other forms of unacceptable behaviour are avoided or nipped in the bud (Liyanagunawardena et al., 2013; Johnson et al., 2014). And of course there’s the issue of flipped classrooms, something that the use of these technologies really encourages but which very few teaching staff have any experience of.

Another issue examined by Liyanagunawardena and her colleagues, in their review of the MOOC literature, is that of digital ‘natives’: are our students really able to use new learning technologies in the ways that we fondly imagine they can? This is a question that applies just as well to the hybrid learning model of ‘universities of the future’. In one recent study cited by the team, researchers found that of all the active participants in a particular MOOC, only one had never been involved in other such courses. This raises the question of “whether a learner has to learn how to learn” in the digital, on-line environment. (Certainly, I’ve found I need to show students how to download podcasts of lectures, something I’d naively believed that they would know how to do better than I!) In other words, any planning for blended delivery must allow for helping learners, as well as teachers, to become fluent in the new technologies on offer.

We live in interesting times.

And I would love to hear from any readers who have experience in this sort of learning environment.

T.R.Liyanagunawardena, A.A.Adams & S.A.Williams (2013) MOOCs: a systematic study of the published literature 2008-2012. The International Review of Research in Open and Distance Learning 14(3): 202-227

L.Johnson, S.Adams Becker, V.Estrada, & A.Freeman (2014) NMC Horizon Report: 2014 Higher Education Edition. Austin, Texas. The New Media Consortium. ISBN 978-0-9897335-5-7

December 12, 2013

Evaluating teaching the hard-nosed numbers way

[This is a copy of a post on my blog PhysicsStop, sci.waikato.ac.nz/physicsstop, 10 December 2013]

Recently there’s been a bit of discussion in our Faculty on how to get a reliable evaluation of people’s teaching. The traditional approach is with the appraisal. At the end of each paper the students get to answer various questions on the teacher’s performance on a five-point Likert Scale (i.e. ‘Always’, ‘Usually’, ‘Sometimes’, ‘Seldom’, ‘Never’.)  For example: “The teacher made it clear what they expected of me.” The response ‘Always’ is given a score of 1, ‘Usually’ is given 2, down to ‘Never’ which is given a score of 5. An averaged response of the questions across students gives some measure of teaching success – ranging in theory from 1.0 (perfect) through to 5.0 (which we really, really don’t want to see happening).

We’ve also got a general question – “Overall, this teacher was effective”. This is also given a score on the same scale.

A question that’s been raised is: Does the “Overall, this teacher was effective” score correlate well with the average of the others?
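To make the arithmetic concrete, here’s a minimal sketch of the scoring scheme (in Python, with invented responses and using the example questions mentioned in this post; it’s a rough illustration under those assumptions, not the actual appraisal software): each Likert choice maps to a score from 1 to 5, and a paper’s average over the ‘other’ questions can then be set alongside its average on the single ‘overall’ question.

```python
# Minimal sketch of the scoring described above: 'Always' = 1 through 'Never' = 5.
# The responses below are invented purely for illustration.
from statistics import mean

LIKERT = {"Always": 1, "Usually": 2, "Sometimes": 3, "Seldom": 4, "Never": 5}
OVERALL = "Overall, this teacher was effective"

def paper_scores(responses):
    """responses: one dict per student, mapping question text -> Likert choice.
    Returns (average of the non-overall questions, average of the overall question);
    low numbers are good on this scale."""
    overall = mean(LIKERT[r[OVERALL]] for r in responses)
    others = mean(LIKERT[answer] for r in responses
                  for question, answer in r.items() if question != OVERALL)
    return others, overall

# Hypothetical returns from two students on one paper:
demo = [
    {"The teacher made it clear what they expected of me": "Always",
     "The teacher gave me helpful feedback": "Usually",
     OVERALL: "Always"},
    {"The teacher made it clear what they expected of me": "Usually",
     "The teacher gave me helpful feedback": "Sometimes",
     OVERALL: "Usually"},
]
print(paper_scores(demo))  # -> (2.0, 1.5)
```

Computing those two numbers for each paper, and then plotting or correlating them, is essentially what the graphs below show.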

I’ve been teaching for several years now, and have a whole heap of data to draw from. So, I’ve been analyzing it (for 2008 onwards), and, in the interests of transparency, I’m happy for people to see it. For myself, the answer to the question “does a single ‘overall’ question get a similar mark to the averaged response of the other questions?” is a clear yes. The graph below shows the two scores plotted against each other, for different papers that I have taught. For some papers I’ve had a perfect score – 1.0 by every student for every question. For a couple, scores have been dismal (above 2 on average):

[Capture1.JPG: the ‘Overall, this teacher was effective’ score plotted against the average of the other questions, for each paper]

What does this mean? That’s a good question. Maybe it’s simply that a single question is as good as a multitude of questions if all we are going to do is to take the average of something. More interesting is to look at each question in turn. The questions start with “the teacher…” and then carry on as in the chart below, which shows the responses I’ve had averaged over papers and years.

[Capture2.JPG: average score for each appraisal question, across papers and years]

Remember, low scores are good. And what does this tell me? Probably not much that I don’t already know. For example, anecdotally at any rate, the question “The teacher gave me helpful feedback” is a question for which many lecturers get their poorest scores (highest numbers). This may well be because students don’t realize they are getting feedback. I have colleagues who, when they give oral feedback, will prefix what they say with “I am now giving you feedback on how you have done” so that it’s recognized for what it is.

So, another question. How much have I improved in recent years? Surely I am a better teacher now than I was in 2008. I really believe that I am. So my scores should be heading towards 1. Well, um, maybe not. Here they are. There are two lines – the blue line is the response to the question ‘Overall, this teacher was effective’, averaged over all the papers I took in a given year; the red line is the average of the other questions, averaged over all the papers. The red line closely tracks the blue – this shows the same effect as seen on the first graph. The two correlate well.

[Capture3.JPG: the two averaged scores plotted by year]

So what’s happening? I did something well around 2010 but since then it’s gone backwards (with a bit of a gain this year – though not all of this year’s data has been returned to me yet). There are a couple of comments to make. In 2010 I started on a Post Graduate Certificate of Tertiary Teaching. I put a lot of effort into this. There were a couple of major tasks that I did that were targeted at implementing and assessing a teaching intervention to improve student performance. I finished the PGCert in 2011. That seems to have helped with my scores, in 2010 at least. A quick peruse of my CV, however, will tell you that this came at the expense of research outputs. Not a lot of research was going on in my office or lab during that time. And what happened in 2012? I had a period of study leave (hooray for research outputs!) followed immediately by a period of parental leave. Unfortunately, I had the same amount of teaching to do and that got squashed into the rest of the year. Same amount of material, less time to do it, poorer student opinions. It seems a logical explanation anyway.

Does all this say anything about whether I am an effective teacher? Can one use a single number to describe it? These are questions that are being considered. Does my data help anyone to answer these questions? You decide.

November 21, 2013

learning leadership

and yes, that’s an intentionally ambiguous title :) (The full version was Learning leadership: the interplay between our own professional development and our classroom practice.)

I recently gave a pecha kucha** presentation on this subject, at an Ako Aotearoa mini-symposium up in Auckland. The idea for the subject of my presentation leapt to the front of my mind while I was at a Teaching Network*** meeting looking at how to raise the profile of teaching in tertiary institutions (specifically, universities). One of my colleagues kicked off part of the discussion with a brief talk on developing leadership in teaching, & I thought, all this applies to leading/guiding our students to become better learners, as well. Which is pretty much the thrust of my presentation. I used the slides mainly as talking points:

[Slides 2-10 of the pecha kucha presentation]

And we finished up with some ideas on what future-focused leadership in teaching and learning could look like.

It would be really good to hear your thoughts on this :)

 

 

** a maximum of 20 slides, with a maximum of 20 seconds per slide. Certainly forces you to focus your ideas. Mine wasn’t that long, because I wanted to use it to spur discussion & so we needed time for that in my short presentation slot.

*** an in-house group for staff from across the institution with an interest in all things to do with teaching.

August 23, 2013

what am i?

I’ve been involved in a few discussions lately, on the issue of what ‘we’ actually are. That is, are those of us who work with students in our lecture rooms, laboratories and tut classes, teachers? Is that the label we want attached to ourselves (eg in things like paper & teaching appraisal surveys)?

Disappointingly, there seems to be a fairly large body of opinion that says “no, no that’s not the right name. ‘Teachers’ is what people in schools could be described as. But we’re lecturers, not teachers.” (Someone went so far as to say that using the name ‘teacher’ would only be confusing, as students associated the term with their school experiences & didn’t expect it at university.)

Interestingly, this is not a reflection of how universities are described in the 1989 Education Act. Section 162 of the Act tells us (my emphasis in bold font) that

 universities have all the following characteristics and other tertiary institutions have 1 or more of those characteristics:

  • (i) they are primarily concerned with more advanced learning, the principal aim being to develop intellectual independence:

  • (ii) their research and teaching are closely interdependent and most of their teaching is done by people who are active in advancing knowledge:

  • (iii) they meet international standards of research and teaching:

  • (iv) they are a repository of knowledge and expertise:

  • (v) they accept a role as critic and conscience of society;

and that

  • a university is characterised by a wide diversity of teaching and research, especially at a higher level, that maintains, advances, disseminates, and assists the application of, knowledge, develops intellectual independence, and promotes community learning:

This all makes it fairly clear that the official view of what folks like me do, in our university jobs, is teaching i.e. facilitating advanced learning in our students, helping them to become independent, autonomous learners, and (while last, definitely not least!) promoting learning in the wider community. (I have to say, in Hamilton at the moment, this often feels like an uphill battle in the face of widespread misinformation about water fluoridation. But you can read more about this here, and here.)

And that’s true whether our job descriptions include the word tutor, lecturer, or professor. To me, if the word ‘teaching’ is included in the description of what universities do, then we are ‘teachers’.

Now, I suppose you could argue that I’m just being picky, but I think this is actually quite an important issue as it relates to what we perceive ourselves doing in our classrooms. That’s because if someone sees themselves as a lecturer, & not a teacher, then they could well have a mental image of what the role of ‘lecturer’ entails. And it’s a fair bet that this includes, you know, lecturing: standing in front of a class and delivering 50 minutes of information on a topic in which that person has expertise.

And to me, this is a problem because there’s an increasing body of research now that clearly shows that this passive-student model of teaching & learning – not just lectures, but also ‘cookbook’ lab classes – is probably the least effective thing we can do in expanding students’ knowledge & understanding of a subject. This was demonstrated very clearly by Richard Hake in his 1998 analysis of the outcomes for more than 6,500 students enrolled in a total of 62 introductory physics courses. Hake found that courses that used ‘interactive-engagement’ techniques for teaching and learning were significantly better – much better – in terms of successful learning and retention of material. Subsequently Carl Wieman and his science-education research group have built on the work of Hake and others in the physics area – have a look at the figures at the end of this 2005 paper, for example: teaching techniques that encourage passive learning by students don’t result in any real long-term learning or retention. Nor is it just physics; I’ve written previously about similar research findings from the area of biology education (e.g. Haak et al., 2011).

‘Teacher’ to me implies the use of a much wider range of classroom techniques that encourage active student engagement and successful long-term learning. And yes, I’m a teacher, and proud of it!

 

Haak DC, HilleRisLambers J, Pitre E, & Freeman S (2011). Increased structure and active learning reduce the achievement gap in introductory biology. Science 332(6034): 1213-6. PMID: 21636776

Hake RR (1998) Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. Am. J. Phys. 66: 64-74

Wieman C & Perkins K (2005) Transforming physics education. Physics Today Online, http://www.physicstoday.org/vol-58/iss-11/p36.shtml

May 20, 2013

out of the mouths of students

First posted over at the Bioblog.

We’ve been trialling some software for on-line paper/teaching appraisals & I got my results back the other day. The appraisal form included open-ended questions where students could give extended feedback on particular issues that concerned them, & I’ve been going through it all so that I can give feedback in my turn, thus ‘closing the loop’. (This is something that I believe is absolutely essential: students need to know that we value their opinions & that, where appropriate, we use them to inform what we do.) I’ve been interested to see that some of the class are definitely thinking outside the ‘box’ that represents my paper, and one comment in particular struck a chord:

One concern with the paper is individuals who were not taught certain aspects of the NCEA Level 3 curriculum. This is a major issue that has resulted from the preference of schools to not teach certain aspects of the course. There NEEDS to be consultation to standardise the NCEA curriculum as well as ensuring that the gap is bridged with communication between tertiary education providers and secondary education providers. As I understand it there is significant concern over the changed NCEA Level 3 Biology course, which now does not teach genetics in year 13. I don’t know the answer in the resolution of this issue, however it will greatly impact on future academic success as well as future funding when grades drop.

This student has hit the nail squarely on the head. Teachers reading this will be working on the following Achievement Standards with their year 12 students this year (where previously gene expression was handled in year 13): AS91157 Demonstrate understanding of genetic variation and change, and AS91159: Demonstrate understanding of gene expression. (You’ll find the Biology subject matrix here.)

And as my student says, this has the potential to cause real problems unless the university staff concerned have made it their business to be aware of these changes and to consider their impact. For the 2014 cohort of students coming in to introductory biology classes will have quite different prior learning experiences (& not just in genetics) from those we are teaching this year and taught in previous years. We cannot continue as we have done in the past.

May 13, 2013

selling services on line

Filed under: education, university — alison @ 2:05 pm

Yesterday’s Sunday Star-Times carried the headline: Chinese cheats rort NZ universities with fakes. The story begins:

An investigation has uncovered a well-organised commercial cheating service for Chinese-speaking students in New Zealand. The long-standing business uses a network of tutors, some outside New Zealand, to write original assignments ordered by Chinese-speaking students attending New Zealand universities, polytechnics and private institutions

and provides a link to an essay bought by the reporting team as part of their investigation.

Frankly, about the only thing that surprised me about the story was the fact that the organisation delivering this ‘service’, and thus helping those using it to cheat, is based in New Zealand. I mean, I’ve just had one of my regular clean-outs of the spam folder. Anything there just gets deleted; there’s so much coming in that I don’t have time to scan it just in case a genuine commenter has been dumped there. But occasionally something at the top of the queue for oblivion catches my eye, and I notice things like this:

Lately, graduates are overloaded to produce essay writing, they can find custom writing services where they are able to buy critical analysis essays.

If you are desperate, you always have a possibility to purchase high quality essay and all your problems will disappear.

Are willing to be a good student? Therefore, you should realise that good high school students buy paper and if it is fits you, you can do the same!

And the icing on the cake:

Some people have got a passion of composing academic papers, but, some of them do not know the correct way to complete research papers. Professional Custom UK Essay writing service is developed to help students who cannot write.

Frankly, the standard of English in that lot should put potential buyers off! At least some of the time they make an attempt at ‘buyer beware’ (but don’t you just know that the following would link to one of these ‘good’ sites?):

If you want to escape any troubles while ordering essays at the paper writing services, you ought to be really thorough. Buy essay services only if you have solid evidences that the people you’ll be dealing with are highly educated.

Lols aside, there’s obviously a market for this sort of stuff; it’s worth pondering why students would buy in work, and what options teaching staff have for avoiding/reducing the temptation.

One obvious motivation is the pressure to do well. Students (& often their families) do invest quite a bit of money into their education. This is particularly true for many international students whose families spend a lot to send them here & support them during their studies. (So do taxpayers, via the student loan system, so we – ie taxpayers – do need to know that we’re getting good value there, & that includes the quality of students’ work.) So fear of getting a poor mark, & perhaps having to repeat a paper, could drive the sort of behaviour that our spammers and the Auckland organisation are hoping to generate.

And unfortunately ‘custom essays’ are not going to be picked up by anti-plagiarism software (eg Turnitin) – unless the ghostwriters are stupid enough to just do a copy-&-paste! That’s not to say they can’t still be identified: an obvious clue would be a standard of English that differed significantly from that in other work submitted by a student; the relevance of the actual content would be another.

But there are ways of reducing incentives to be dishonest around assessment. For example, teachers can review their use of ‘high stakes’ assessment items: single essays or reports that are worth a large proportion of the final grade (& so can offer some incentive to cheat in order to gain a higher mark). ‘End-loading’ assessment, so that it’s all due at the end of semester, is not going to help here either.

Another tool would be to have students generate work in class. Now obviously that won’t work if you want a lengthy report, but what about getting them to do the relevant research, then asking them to write an abstract, or a summary of their findings, in class, & having it peer-marked (using your marking scheme) or doing that task yourself? The students still gain practice in useful skills & – hopefully – your workload is somewhat reduced. If students get more involved in the writing process from the start, & are supported in learning the various skills involved, they might be more confident in their own abilities & feel less need to cheat on the assignment.

Recommended reading**:

J.C.Bean (2001) Engaging Ideas: the professor’s guide to integrating writing, critical thinking, and active learning in the classroom. Jossey-Bass (Wiley). ISBN 978-0-787-90203-2

** actually, make that highly recommended!

October 13, 2012

why kids should grade teachers

Next week my first-year biology students will be doing an appraisal of this semester’s paper, & of those academic staff involved in teaching it. They’re asked about the perceived difficulty of the paper, the amount of work they’re expected to do for it, whether they’ve been intellectually stimulated, the amount of feedback they receive on their work, how approachable staff are, & much else besides. (The feedback one was always my worst-scoring attribute – until I asked the students what they thought ‘feedback’ meant. It turned out that they felt this described one-to-one verbal communication. We had a discussion about all the other ways in which staff can give feedback – & the scores went up.) The results are always extremely useful, as not only do we find out what’s working, but we also discover what’s not (or at least, what the students perceive as not working) & so may need further attention.

Anyway, my friend Annette has just drawn my attention to a lengthy post in The Atlantic, by Amanda Ripley. It made fascinating reading.

In towns around the country this past school year, a quarter-million students took a special survey designed to capture what they thought of their teachers and their classroom culture. Unlike the vast majority of surveys in human history, this one had been carefully field-tested. That research had shown something remarkable: if you asked kids the right questions, they could identify, with uncanny accuracy, their most – and least – effective teachers.

Ripley, reporting for the Atlantic, was able to follow a 4-month pilot project that was run in 6 schools in the District of Columbia. She notes that about half the states in the US use student test data to evaluate how teachers are doing.

Now, this approach is fraught with difficulty. It doesn’t tell you why children aren’t learning something, for example (or why they do, which is just as interesting). And it puts huge pressure on teachers to ‘teach to the test’ (although Ripley says that in fact “most [American] teachers still do not teach the subjects or grade levels covered by mandatory standardized tests”). It ignores the fact that student learning success can be influenced by a wide range of factors, some of which are outside the schools’ control. (And it makes me wonder how I’d have done, back when I was teaching a high school ‘home room’ class in Palmerston North. Those students made a fair bit of progress, and we all learned a lot, but they would likely not have done too well on a standardised test of academic learning, applied across the board in the way that National Standards are now.)

So, the survey. It grew out of a project on effective teaching funded by the Bill & Melinda Gates Foundation, which found that the top 5 questions – in terms of correlation with student learning – were

  1. Students in this class treat the teacher with respect.
  2. My classmates behave the way my teacher wants them to.
  3. Our class stays busy and doesn’t waste time.
  4. In this class, we learn a lot almost every day.
  5. In this class, we learn to correct our mistakes.

and the version used with high school students in the survey Ripley writes about contained 127 questions. That sounds like an awful lot to me, but apparently most kids soldiered on & answered them all. Nor did they simply choose the same answer for each & every question, or try to skew the results:

Students who don’t read the questions might give the same response to every item. But when Ferguson [one of the researchers] recently examined 199,000 surveys, he found that less than one-half of 1 percent of students did so in the first 10 questions. Kids, he believes, find the questions interesting, so they tend to pay attention. And the ‘right’ answer is not always apparent, so even kids who want to skew the results would not necessarily know how to do it.

OK – kids (asked the right questions) can indicate who is a good, effective teacher. What use is made of these results, in the US? The researchers say that they shouldn’t be given too much weighting in assessing teachers – 20-30% – & only after multiple runs through the instrument, though at present few schools actually use them that way. This is important – no appraisal system should rely on just one tool.

That’s only part of it, of course, because the results are sent through to teachers themselves, just as I get appraisal results back each semester. So the potential’s there for the survey results to provide the basis of considerable reflective learning, given the desire to do so, & time to do it in. Yet only 1/3 of teachers involved in this project even looked at them.

This is a problem in the NZ tertiary system too, & I know it’s something that staff in our own Teaching Development Unit grapple with. Is it the way the results are presented? Would it be useful to be given a summary with key findings highlighted? Do we need a guide in how to interpret them? Do people avoid possibly being upset by the personal comments that can creep into responses (something that can be avoided/minimised by explaining in advance the value of constructive criticism – and by being seen to pay attention to what students have to say)?

Overall, this is an interesting study & one whose results may well inform our own continuing debate on how best to identify excellent teaching practice. What we need to avoid is wholesale duplication and implementation in our own school system without first considering what such surveys can & can’t tell us, and how they may be incorporated as one part of a reliable, transparent system of professional development and goal-setting. And that, of course, is going to require discussion with and support from all parties concerned – not implementation from above.
