Talking Teaching

December 12, 2013

Evaluating teaching the hard-nosed numbers way

[This is a copy of a post on my blog PhysicsStop, sci.waikato.ac.nz/physicsstop, 10 December 2013]

Recently there’s been a bit of discussion in our Faculty on how to get a reliable evaluation of people’s teaching. The traditional approach is with the appraisal. At the end of each paper the students get to answer various questions on the teacher’s performance on a five-point Likert scale (i.e. ‘Always’, ‘Usually’, ‘Sometimes’, ‘Seldom’, ‘Never’). For example: “The teacher made it clear what they expected of me.” The response ‘Always’ is given a score of 1, ‘Usually’ is given 2, down to ‘Never’, which is given a score of 5. Averaging the responses across questions and students gives some measure of teaching success – ranging in theory from 1.0 (perfect) through to 5.0 (which we really, really don’t want to see happening).
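For anyone who likes to see the arithmetic spelled out, here’s a minimal sketch (in Python, with invented responses – not real appraisal data) of how such an averaged score is computed:

```python
# Map Likert responses to the appraisal scores described above.
SCORES = {"Always": 1, "Usually": 2, "Sometimes": 3, "Seldom": 4, "Never": 5}

# Invented responses: one list per student, one answer per question.
responses = [
    ["Always", "Usually", "Always"],
    ["Usually", "Usually", "Sometimes"],
]

# Average over every answer from every student: 1.0 is perfect, 5.0 is dire.
all_scores = [SCORES[answer] for student in responses for answer in student]
average = sum(all_scores) / len(all_scores)
print(f"Averaged appraisal score: {average:.2f}")  # 1.83 for these responses
```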

We’ve also got a general question – “Overall, this teacher was effective”. This is also given a score on the same scale.

A question that’s been raised is: Does the “Overall, this teacher was effective” score correlate well with the average of the others?

I’ve been teaching for several years now, and have a whole heap of data to draw from. So, I’ve been analyzing it (for 2008 onwards), and, in the interests of transparency, I’m happy for people to see it. For my own data, the answer to “does a single ‘overall’ question get a similar mark to the averaged response of the other questions?” is a clear yes. The graph below shows the two scores plotted against each other, for different papers that I have taught. For some papers I’ve had a perfect score – 1.0 by every student for every question. For a couple, scores have been dismal (above 2 on average):

[Figure: the ‘Overall, this teacher was effective’ score plotted against the average of the other appraisal questions, for each paper taught]
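If you wanted to run the same check on your own appraisal data, the comparison is just a correlation between two sets of per-paper scores. A minimal sketch (the numbers below are invented, not my actual results; statistics.correlation needs Python 3.10+):

```python
import statistics

# Invented per-paper scores: the 'overall' question vs the mean of the rest.
overall = [1.0, 1.2, 1.5, 1.8, 2.1, 1.3]
others_mean = [1.0, 1.3, 1.4, 1.9, 2.2, 1.3]

# Pearson correlation; values near 1 mean the single question tracks the rest.
r = statistics.correlation(overall, others_mean)
print(f"Pearson r = {r:.3f}")
```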

What does this mean? That’s a good question. Maybe it’s simply that a single question is as good as a multitude of questions if all we are going to do is to take the average of something. More interesting is to look at each question in turn. The questions start with “the teacher…” and then carry on as in the chart below, which shows the responses I’ve had averaged over papers and years.
[Figure: average response to each appraisal question, averaged over papers and years]
Remember, low scores are good. And what does this tell me? Probably not much that I don’t already know. For example, anecdotally at any rate, the question “The teacher gave me helpful feedback” is a question for which many lecturers get their poorest scores (highest numbers). This may well be because students don’t realize they are getting feedback. I have colleagues who, when they give oral feedback, will prefix what they say with “I am now giving you feedback on how you have done” so that it’s recognized for what it is.
So, another question: how much have I improved in recent years? Surely I am a better teacher now than I was in 2008. I really believe that I am. So my scores should be heading towards 1. Well, um, maybe not. Here they are. There are two lines – the blue line is the response to the question ‘Overall, this teacher was effective’, averaged over all the papers I took in a given year; the red line is the average of the other questions, averaged over all the papers. The red line closely tracks the blue – this shows the same effect as seen on the first graph. The two correlate well.
[Figure: yearly averages from 2008 onwards – blue: ‘Overall, this teacher was effective’; red: average of the other questions]
So what’s happening? I did something well around 2010 but since then it’s gone backwards (with a bit of a gain this year – though not all of this year’s data has been returned to me yet). There are a couple of comments to make. In 2010 I started on a Postgraduate Certificate of Tertiary Teaching, and I put a lot of effort into it. There were a couple of major tasks targeted at implementing and assessing a teaching intervention to improve student performance. I finished the PGCert in 2011. That seems to have helped with my scores, in 2010 at least. A quick perusal of my CV, however, will tell you that this came at the expense of research outputs – not a lot of research was going on in my office or lab during that time. And what happened in 2012? I had a period of study leave (hooray for research outputs!) followed immediately by a period of parental leave. Unfortunately, I had the same amount of teaching to do, and it got squashed into the rest of the year. Same amount of material, less time to do it, poorer student opinions. It seems a logical explanation, anyway.
Does all this say anything about whether I am an effective teacher? Can one use a single number to describe it? These are questions that are being considered. Does my data help anyone to answer these questions? You decide.

September 23, 2013

teach creationism, undermine science

This is something I originally wrote for my ‘other’ blog.

Every now & then I’ve had someone say to me that there’s no harm in children hearing about ‘other ways of knowing’ about the world during their time at school, so why am I worried about creationism being delivered in the classroom? 

Well, first up, my concerns – & those of most of my colleagues – centre less on whether teaching creationism/intelligent design is bringing religion into the science classroom1, & more on how well such teaching prepares students for understanding and participating in biology in the 21st century. For example, if a school can make statements like this:

It is important that children and adults are clear that there is one universal truth. There can only be one truthful explanation for origins that means that all other explanations are wrong. Truth is truth. Biblical truth, scientific truth, mathematical truth, and historical truth are in harmony2.

and go on to list the “commonly accepted science we believe in”, then their students are not gaining any real understanding of the nature of science. And the statements regarding the science curriculum that I’ve linked to above indicate that it’s not just biology with which the school community has an issue. Physics, geology, cosmology: all have significant sections listed under “commonly accepted ‘science’ we do not believe in”3. (Did you notice the quote marks around that second mention of science?)

Science isn’t a belief system, & while people are entitled to their own opinions they are not entitled to their own facts. Any school science curriculum that picks & chooses what is taught on the basis of belief is delivering (to quote my friend David Winter) “a pathetic caricature of actual science, … undermin[ing] science as a method for understanding the world and leav[ing] the kids that learned it very poorly prepared to do biology in the 21st century.” Or indeed, to engage with pretty much any science, in terms of understanding how science is done and its relevance to our daily lives. And if we’re not concerned about that lack of science literacy, well, we should be.

 

1 although I do think this is a problem too.

2 with the subtext that the first ‘truth’ takes precedence.

3 Taken to its extreme, the belief system promoted in teaching creationism as science can result in statements such as this:

We believe Earth and its ecosystems – created by God’s intelligent design and infinite power and sustained by His faithful providence – are robust, resilient, self-regulating, and self-correcting, admirably suited for human flourishing…

…We deny that Earth and its ecosystems are the fragile and unstable products of chance, and particularly that Earth’s climate system is vulnerable to dangerous alteration because of miniscule changes in atmospheric chemistry.

This does not look like a recipe for good environmental management to me.

 

October 6, 2012

falling numbers in physics – what do teachers think?

A topic that gets quite a frequent airing in our tearoom is the decline in the number of students taking physics. This issue isn’t peculiar to my institution – a quick look at the literature indicates that it’s a global problem**. The question is, what can be done about this? It’s a question that Pey-Tee Oon & R. Subramaniam (2010) set out to answer.

They identified (from the science education literature) several reasons why students don’t like physics: it’s perceived as boring, with significant mathematical demands; the passive teaching methods used in many classrooms are off-putting; and the curriculum is crowded. They also noted that teachers’ perceptions are important as they can affect students’ subject choices, and so they sought the help of physics teachers in Singaporean secondary schools, noting that

[physics] teachers are in a position to [comment on] this debate [around declining interest in studying physics at university] as the intent to study or not to study physics is made by students at the school level – the influence of physics teachers on students taking physics cannot thus be underestimated.

In addition to collecting data on teaching experience and educational background, Oon & Subramaniam asked the teachers (all 166 of them) for suggestions on how this might be turned around:

Suggest one way in which more students can be encouraged to study physics at the university.

Several key points came up again and again in the teachers’ responses to that open-ended question: reviewing the current school physics curriculum, “making the teaching of physics fun”, improving graduates’ career prospects, publicising career opportunities, and running enrichment programs.

Now, the NZ physics curriculum was recently redeveloped, as part of the rewriting of the National Curriculum document; more recently, the Achievement Standards were rewritten to align them more closely with that document. So, if that redeveloped curriculum doesn’t “go beyond the classical topics and include more modern topics which are related to current applications” (& Marcus can probably give more informed comment on that than I can), then we may have missed the boat on that one. Of course, the teachers’ suggestion that more modern topics be included means that – when we do get the chance to spring-clean – it may be necessary to drop some ‘traditional’ content. Otherwise we’d simply be cramming the curriculum ever fuller – and the perception of an overloaded curriculum can make the subject seem more difficult (a problem that Biology shares), something other research has found to be a definite turn-off for students. There’s also the ‘fun’ aspect to consider – how do we address that?

It’s hard to see how the universities can improve physics graduates’ career prospects (something that probably needs a push at government level, if the government of the day is serious about the importance of studying the sciences) but we can certainly help to promote those options that are available. Among other suggestions, the teachers thought that the following could help: careers talks emphasising the value of physics, roadshows fronted by high-profile research scientists, better marketing by university physics departments, and enhanced career guidance (at both secondary and tertiary level). On the career front, Oon & Subramaniam point out that “Wall Street has a high concentration of physicists”, which suggests that career opportunities are more diverse than many students might think.

As for physics enrichment programs – again, a significant majority of the teachers surveyed felt that the following steps would be valuable:

  • creating opportunities for physics researchers and lecturers to go into schools to promote the subject;
  • running workshops in schools to raise awareness of the importance of this subject;
  • offering ‘popular’ physics seminars;
  • running on-campus physics enrichment camps;
  • and developing outreach programs supporting and promoting physics.

The teachers felt that university-level teaching also needs a review (ie, the problem of declining enrolments won’t be solved solely by changes in & support for physics teaching in schools):

One of the most striking findings from this study is the urge by teachers for a rebranding of the university physics curriculum. Creating innovative interdisciplinary programs at the undergraduate level – for example, marrying physics with other disciplines (eg, finance, management etc) to meet the growing needs of current market demand, deserves consideration… For example, students can gain scientific training in physics and technical skills in finance if physics is integrated with finance… It is a win-win solution with minimum sacrifice… [that] will not only increase the employability of physics graduates but will also further the attractiveness of undergraduate physics programs.

The researchers note that such interdisciplinary programs are already being offered at some overseas institutions, and certainly we are beginning to see an increasing emphasis here in New Zealand on the value of interdisciplinarity.

Oon & Subramaniam have definitely provided some food for thought. And given the nature of the problem, perhaps it’s time for physicists around New Zealand to work together to address it?

P-T Oon & R. Subramaniam (2010) Views of physics teachers on how to address the declining enrolment in physics at the university level. Research in Science and Technological Education 28(3): 277-289. http://dx.doi.org/10.1080/02635143.2010.501749

** Having said that, Michael Edmonds has just drawn my attention to this talk (shown on YouTube) by UK physicist Professor Brian Cox.

June 12, 2011

effects of changing teaching styles on student learning

This is a repost of an item I’ve just written for my ‘other’ blog. It would be good to hear what others think of the teaching methods it examines :-)

I know I’m creeping into Marcus’s territory here but the research I’m going to discuss today would apply to pretty much any tertiary classroom :-)

This story got a bit of press about a month ago, with the Herald carrying a story under the headline: It’s not teacher, but method that matters. The news article went on to say that “students who had to engage interactively using the TV remote-like devices [aka 'clickers'] scored about twice as high on a test compared to those who heard the normal lecture.” However, as I suspected (being familiar with Carl Wieman’s work), there was a lot more to this intervention than using a bit of technology to ‘vote’ on quiz answers :-)

The methods traditionally used to teach at university (ie classes where the lecturer lectures & the students take notes) have been around for a very long time & they work for some – after all, people of my generation were taught that way at uni, & it’s not uncommon to hear statements like “we succeeded & today’s students can do it too”. But transmission methods of teaching don’t reach a lot of students particularly well, nor do they really engage students with the subject as well as they might. (And goodness knows, we need to engage students with science!)

Wieman has already documented the impact (or lack of it) of traditional teaching methods on student learning in physics, but this paper (Deslauriers, Schelew & Wieman, 2011) goes further in examining the effect on student learning and engagement of changing teaching methods in one group of first-year students in a large undergraduate physics class. It can be hard to manage a class of 850 students, and so the lecturers at the University of British Columbia had split it into 3 groups, with each group taught by a different lecturer. While the lecturers prepared and taught the course material independently, exams, assignments and lab work were the same for all students.

Two of the three groups of students were involved in the week-long experiment; one continued to be taught by its regular, highly experienced instructor, while the other group was taught by a graduate student (Deslauriers) who’d been trained in ‘active learning’ techniques known to be effective in enhancing student learning. And ‘active learning’ wasn’t just using clickers: the ‘experimental’ group had “pre-class reading assignments, pre-class reading quizzes [on-line, true/false quizzes based on that reading], in-class clicker questions…, small-group active learning tasks, and targeted in-class instructor feedback” (Deslauriers et al, 2011). Students worked on challenging questions and learned to practise scientific reasoning skills to solve problems, all with frequent feedback from the instructor. There was no formal lecturing at all; the pre-class reading was intended to cover the factual content normally delivered in class time. While the control group’s lecturer also used clickers, this was simply to gain class answers to quiz questions – unlike in the experimental class, it wasn’t combined with student-student discussion.

One reason often given by lecturers for not trying new things in the classroom is that the students might resist the changes. But you can avoid that. I know Marcus finds his students are very accepting of change if he explains in advance what he’s doing & how the innovation will hopefully enhance their learning, and Deslauriers, Schelew & Wieman did the same, explaining to students “why the material was being taught this way and how research showed that this approach would increase their learning.”

So, what was the effect of this classroom innovation? Well, it was assessed in several ways.

During the experiment, observers assessed how much the students seemed to be engaged in & involved with the learning process; they also counted heads to see what attendance was like. At the end of the intervention, learning was assessed using a multichoice test written by both instructors – prior to this, all learning materials were provided to both groups of students. And students were asked to complete a questionnaire looking at their attitudes to the intervention.

In both classes, only 55-57% of students actually attended class, prior to the experiment. Attendance remained at this level in the control group, but it shot up to 75% during the experimental teaching sessions. Engagement prior to the intervention was the same in both groups, 45%, but nearly doubled to 85% in the experimental cohort. Test scores taken in the week before the experiment were identical for the two groups (an average mark of 47%, which doesn’t sound very flash) – but the post-intervention test told a completely different story. The average score for the control group was 41% and for the experimental class it was 74% (with a standard deviation in each case of 13%). And the intervention was very well-received by students, with 77% feeling that they’d have learned more if the entire first-year course had been taught using interactive methods, rather than just that one week’s intervention.
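To put those numbers in context, here’s a quick back-of-the-envelope effect-size calculation from the means and standard deviation quoted above (my own sketch, not an analysis from the paper):

```python
# Post-test means and (common) standard deviation reported above, in percent.
control_mean, experimental_mean, sd = 41.0, 74.0, 13.0

# Cohen's d: the difference in means, in units of the standard deviation.
d = (experimental_mean - control_mean) / sd
print(f"Cohen's d = {d:.1f}")  # about 2.5 standard deviations - a huge effect
```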

Which is fairly compelling evidence that there really are better ways of teaching than the standard ‘transmission-of-knowledge’ lecture format. I try to use a lot of interactive techniques anyway – but reading this paper has cemented my intention to try something completely different next year: giving readings before a class on excretion (a subject that a large proportion of the class always seems to struggle with), and using the lecture time for questions, discussion, and probably a quiz that carries a small amount of credit, based on the readings they’ll have done. And of course, carefully explaining to the students what I’m doing.

I’ll keep you posted :-)

Deslauriers L, Schelew E & Wieman C (2011) Improved learning in a large-enrollment physics class. Science 332(6031): 862-864. PMID: 21566198

December 13, 2010

Experimenting with Experimenting

Filed under: education, science teaching — Marcus Wilson @ 1:35 pm

This is a copy of a post on my blog PhysicsStop http://sci.waikato.ac.nz/physicsstop

Last week I was at the Australian Institute of Physics congress, in Melbourne.

One of my talks concerned a piece of work I’d done with my second year experimental physics class this year. Before going to Melbourne, I gave the talk a trial run at the University of Waikato’s ‘celebrating teaching’ day. It provoked a few comments then, and a few more in Melbourne, so I thought I’d give a summary of it here.

I’ve been teaching experimental physics more or less for the whole time I’ve been at the university (my divine punishment for navigating my own undergraduate studies on the basis of finding the path with the least amount of practical work in it). I’ve noticed that few students do any planning before the lab. Some will turn up at the lab without even knowing what experiment they will be trying to do. So this year I’ve tried to turn this around.

The great thing about the theory of tertiary education is that it says that when there is a problem, the solution is often easy. And that is to pay attention to what you are assessing. “If you want to change student learning… change the assessment” (G. Brown, J. Bull and M. Pendlebury, Assessing Student Learning in Higher Education. Routledge, London, New York (1997)). The issue was, I think, that I was never actually getting the students to plan anything. They learn that they can get good marks without doing any preparation beforehand, because the instructions for the lab are pretty well provided to them.

So this year I’ve forced them to prepare for a couple of experiments, by removing the instructions. Instead, I gave them the task they had to do, and let them get on with working out how it should be done, using what equipment, etc. Since we use some moderately complicated lab equipment, I chose to pair up experiments – one week to introduce them to the equipment, the next to give them an experiment to do (without instructions) that used that equipment. That way, learning to drive the equipment did not become a distraction.

For the most part (around three quarters), students overcame initial hesitations (horror?) and tackled this very well. Most enjoyed it, and thought the approach was beneficial. However, the other quarter really didn’t like it. I know this from appraisal forms, a focus group, and casual conversations with students in the lab.

I gave my talk and there was a fair bit of discussion afterwards. The audience (mostly secondary teachers, plus tertiary teachers with a strong interest in education) thought that the way these experiments were assessed needed very careful thought to get the most out of the students. Was I assessing the ‘planning’ task itself (and how?), the end results of the planning, or something else? I thought I was assessing ‘planning’, as well as how well the student carried out and documented the experiment after the planning, but possibly it was not transparent enough to some of the students. That’s worth working on for next year.

Also, was I concerned that students might get their experiment ‘planned’ by someone else – for example, by consulting another student in the group who had done the experiment in a previous week? Personally, this doesn’t bother me – in fact, I would encourage such consultation as it shows students are taking the task seriously. If a student finds it easier to learn from other students rather than from me, I have no problem with that. If the end result is that he or she learns (and I mean ‘learns’, not ‘parrots’) what I wish them to learn (which is more than just facts) then I have no problem with whatever route they take.

I was encouraged by a final comment by a lecturer who had done a similar thing with a large first-year class (in contrast to my small second-year class) and found very similar results – generally successful and well-liked by students, but with a significant minority that had strong views the other way.

November 17, 2010

Learning Outcomes

Filed under: university — Marcus Wilson @ 4:31 pm

This is a copy of a post on my blog PhysicsStop.

This week I’ve had three fairly lively discussions about learning outcomes in our university papers.  (It’s well blogged already – e.g. here, but I’ll add some things to the mix). The concept is hardly new, but it is only just being given a really wide profile here at Waikato. Although many individual teachers, and many departments, have routinely written learning outcomes for their papers up to this point, it is now becoming mandatory. This is causing a bit of anxiety.

I honestly think that most of the adverse reaction is because it is seen as being another piece of administration work to do that has nothing to do with the task of actually teaching. In fact, it has everything to do with the task of teaching. Simply put, if you don’t know what the learning outcomes for your paper are, your teaching really has no purpose. 

So, for you non-teachers out there, what am I talking about? A learning outcome for a course is a statement of what we want students to learn from it, expressed in such a way that it tells us how we would know (and the student would know) whether the student has achieved that learning. Biggs and Tang put it as “…a statement of how we would recognize if or how well students have learned what is intended they should learn.” (Biggs, J. & Tang, C. (2007). Teaching for quality learning at university. Maidenhead, U.K.: Open University Press.)

This excludes words like ‘understand’ and ‘know’: they are too vague, and my idea of understanding could be very different from a student’s. Instead, how would my students be able to demonstrate that understanding?

So, an example. I would like my students to understand ‘skin depth’. That’s not a good learning outcome – it doesn’t indicate how that understanding could be demonstrated – so I ask myself how my students could demonstrate that they understand.  They could do that by calculating a value for skin depth for a particular situation and then interpreting what this means with regard to how far an electromagnetic wave will penetrate into a material.   So I could write “A student (who successfully completes this paper) will be able to calculate a value for skin depth in different electromagnetic scenarios and discuss critically the significance of their result in terms of the penetration of electromagnetic radiation.” 
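(For the non-physicists: here’s a minimal sketch of the kind of calculation that outcome asks for, assuming copper at mains frequency – standard textbook values, purely for illustration:)

```python
import math

# Skin depth for a good conductor: delta = sqrt(2 / (omega * mu * sigma)).
f = 50.0                  # frequency in Hz (mains frequency)
omega = 2 * math.pi * f   # angular frequency in rad/s
mu = 4e-7 * math.pi       # permeability of copper, approximately mu_0, in H/m
sigma = 5.8e7             # conductivity of copper in S/m

delta = math.sqrt(2 / (omega * mu * sigma))
print(f"Skin depth = {delta * 1000:.1f} mm")  # about 9.3 mm
```

The interpretation step – that a 50 Hz electromagnetic wave penetrates only about a centimetre into copper – is the part the learning outcome asks students to discuss critically.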

There is content in the learning outcome, but it isn’t a list of content (that would be something like “a student will study… skin depth and penetration of electromagnetic radiation into matter”). Learning outcomes are suggestive of assessment tasks – they have to be – since learning needs to be demonstrated. So, for my example, I could assess my students’ ability to meet the learning outcome by having them calculate a skin depth and asking them to comment on what their result implies.

Note that the assessment follows from the learning outcomes, not the other way around. That is, we don’t take the same old assessment that we’ve been using for the last ten years and just write some outcomes based on it. We write assessments based on what we want the students to learn (we all know students learn that which gets them through the assessments).   I used to moan that my students didn’t want to learn physics, they just wanted to pass the exam. The solution was obvious (so obvious that it only took me a few years to grasp…) – if the exam assesses the physics I want students to learn, they will learn it. And probably enjoy it a whole lot more as well.

November 9, 2010

moderation in all things

Over the last week I’ve been marking exams & the experience has led me to think (yet again) about the question of moderation. More precisely, of moderating exams – both the questions, & the marking itself. I’m beginning to think that this is a foreign concept for many teachers in undergraduate papers. (Graduate papers are a different kettle of fish – written exams are sent out so the marking can be moderated, and theses have external examiners as well as being marked in-house.)

Over the years that I’ve been teaching at the tertiary level I’ve seen some pretty awful practices: questions that are so poorly worded as to be quite ambiguous (if I’m course coordinator I take the liberty of rewriting these…); the same questions used year after year (so when papers are available in the library students catch on to this & can simply prepare & memorise answers); questions that require a single, rote-learned word or phrase to answer yet carry the same marks as a question that requires some thought and understanding to answer well… Which generates questions in response: why do people write the questions that they do? What sort of learning do these practices encourage in students? Why don’t we have some formalised system of moderating the papers prior to the exam? (You could probably add more but I don’t want this post to be toooo long!)

Now, here’s why I’m asking (& attempting to answer!) these questions – up until this year I was involved in setting assessment at a national level for our secondary school examinations, plus I’ve also done some work looking at Unit Standards at the tertiary level. One of the big differences between secondary and undergraduate exams is that the papers themselves are very closely moderated. Drafts are closely examined by a number of people & the examiner has to be able to justify why they’ve written a particular question in the way that they have. Ambiguities are removed, language is tightened up, examples are scrutinised for relevance and usefulness – and to be sure that they permit discrimination, ie the ability to distinguish between the excellent, the middle-of-the-road, and the just-getting-by students. And the questions themselves are typically supported by some contextual information, the philosophy being that at least some of the time we should be looking at students’ understanding of a topic and not simply their ability to recall facts in a sort of soundbite way. (I sometimes wonder what students who’ve experienced that system think, when they come to uni & hit a different set of assessment practices…)

I suspect that the main reason this isn’t done for many university exams is that those setting the papers haven’t had any training in doing it. Usually someone’s been hired onto the staff on the basis of their research experience; if they’ve taught before that’s fine, but the focus has only recently begun to swing to teaching. And if they do have prior teaching experience, I’d be willing to bet that ‘experience’ is the key word, ie they’ve picked it up as they went along. There’ll be previous tests/exams to go on for examples in setting assessment & that’s probably what new appointees base their own assessment practices on. The trouble is that this isn’t the best way to develop good assessment practices. That, plus time pressures (multi-choice & one/few-word answers are faster to mark than any sort of extended or open-ended question), leads to overuse of some of the sorts of questions I was complaining about at the start.

Which sort of leads on to an understanding of ‘curriculum’. From tearoom chats, it seems to me that a fair number of my colleagues see ‘curriculum’ as being ‘the facts that we teach’. In fact it’s so much more. I guess one way of bringing folks to realise this would be to ask: OK, what attributes do you want our graduates to have when they finish studying? (We’ve actually got a list of these on our ‘graduate profile’.) The ensuing list will include things like practical skills, communication skills, the ability to think critically etc. So the response to that is: how are students going to pick them up? For example, they’re never going to start thinking critically about the things they’re learning until they get a clear signal that this is valued (eg via exam questions that allow them to demonstrate that skill). Developing those attributes is also part of the curriculum, and helping students to develop them is also part of our job. How we teach is also part of it: we need to take care to model the skills and attributes that we wish to see in our students.

And it’s important to be aware of that, because teaching methods and assessment practices combine to shape student learning. Which is why that ‘same questions every year’ approach is such a concern. (People can – and do – complain that the NCEA, with its limited set of Achievement Standards that focus on only some areas of the curriculum, drives an undesirable focus on learning just what’s needed to pass the exam. But having questions that change little, if at all, from year to year does exactly the same.) If students know that Jim Bloggs only ever asks a particular set of questions, of course they’re likely to focus on learning only what they need to answer them! They may even write answers ahead of time and commit them to memory. And if the questions encourage shallow, rote learning, then all the other interesting things Jim’s said during the year will fall by the wayside (and indeed, you have to wonder whether he has a set of learning outcomes in mind when writing his lectures and his tests…). Surely we want more than this from our students?

So by now you’ll have guessed that I think we do need some form of moderation for undergraduate exam papers. It doesn’t need to be external – it could simply be a brief meeting of those involved in teaching, to go over the paper and be sure that the questions are going to elicit the sort of responses that we really value. Which, of course, fits within discussions around curriculum – discussions that need to go beyond just the individual papers. Which is going to get quite involved… I think I’ll just go & have a nice lie-down while I contemplate this prospect in all its glorious complexity :)

June 10, 2010

What equation do I need?

Filed under: Uncategorized — Marcus Wilson @ 3:19 pm

This is a copy of a recent post to my blog http://sci.waikato.ac.nz/physicsstop

With A-semester exams looming, the students here at Waikato are becoming a little more focused on their work. That inevitably means that I get more of them coming to me after a lecture, or knocking on the door of my office. And that is good.

One of the most common questions I get, usually in relation to an assignment, or a past exam paper, is ‘What equation do I need to solve this?’. I have slowly come to the conclusion (by slow, I mean six years) that when a student says this he actually means the following:

1. I don’t understand this

2. But I don’t mind that I don’t understand, I just need to know what to do to answer the question (and pass the assignment, exam etc.)

It’s the second one that is interesting. Any person can put numbers into an equation and come up with an answer, but it doesn’t necessarily add to their understanding. Unfortunately, though, it can add to their ability to pass examinations, which is what drives students. And giving students that understanding is part of what teaching a Bachelor of Science degree is about. Without it, a student cannot hope to apply their learning to new situations. Remember, that is what real scientists (e.g. physicists) do. No-one gets a science job that just involves putting numbers into well-established formulae. For example, our graduate profile for a BSc degree says a BSc graduate should have

 “Skills, knowledge and attributes needed to contribute directly and constructively to specific aspects of the building of a science based knowledge economy in New Zealand”
 
That is what I need to be building in my students – the ability to do just this. It is the scientist who will drive the economy forward and solve the world’s major problems. Will our BSc graduates be able to embark down this path? Sure, a lot of science learning occurs after a BSc, but a BSc shows that someone is reasonably competent in their use of science, enough to contribute positively. How can you contribute positively if you don’t care that you don’t understand something? (Point 2 above.)
 
If we produce BSc graduates who are skilled in putting numbers into formulae and nothing else, we are devaluing the BSc, denying the country good scientists (and therefore harming the economy) and short-changing the taxpayer, who provides the majority of the money to the universities to educate students. So when I get asked ‘What equation do I need?’ I need to stop and think: what does the student really want, and is it in his best interests (and the country’s) to give him that?

N.B. I could also say the point is that we, the teachers, need to set decent assignments – ones where stuffing numbers into formulae isn’t sufficient to pass.

May 17, 2010

more on student engagement & active learning

This is a re-post from something I’ve just added to the Bioblog. While its focus is on engaging students with maths & physics, I believe that the ideas it offers can be equally well applied to teaching & learning in any of the sciences.

A colleague of mine (thanks, Jonathan!) sent me through the link to a talk by Dan Meyer, on teaching maths & physics. Dan’s talking about how to engage students with the subjects he teaches; how to put them on a level playing field – where they can all understand what a question’s about; how to get them talking about the question in a way that guides them to understanding how to get at the answer in a meaningful way. His aim: for all his students to become ‘patient problem-solvers’. His hope: for textbook authors to develop resources that support this aim instead of obfuscating it. Enjoy. (While Dan’s talk is aimed at high school teachers, I’d argue that it should also be compulsory viewing for the university staff who teach those teachers – where else are teachers-in-training to get this information from?)

March 30, 2010

engage them with interactive learning

After my lecture today one of the students said, “I like your lectures, they’re interactive. You make me want to come to class.”

I’m really rapt about this; I’ve worked hard over the last few years to make my lectures more interactive: creating an atmosphere where the students feel comfortable & confident about asking questions; where we can maybe begin a dialogue around the topic du jour; where we can spend a bit of time working around a concept. I guess this reflects my own teaching philosophy: I’ve never felt happy with the ‘standard’ model. (I can hear some of you saying, but what’s that? I guess you could say, the stereotypical, teacher-focused model of lecture delivery.) Way back when I was a trainee secondary teacher, my then-HoD was very big on me talking & the kids writing; we had to agree to disagree… Anyway, as time’s gone on my teaching’s become more & more ‘research-informed’, in the sense that I’ve increasingly delved into the education literature & applied various bits & pieces to what I do in the classroom. Anyway, to cut what could become a very long story a bit shorter, there’s good support for the interactive approach in the literature.

A recent, & prominent, proponent of getting students actively involved in what goes on in the lecture theatre is Nobel laureate Carl Wieman, who gave a couple of seminars at Auckland University & AUT late last year. His talks were titled Science education in the 21st century – using the insights of science to teach/learn science. I wasn’t lucky enough to go, but the next best thing – the PowerPoint presentation he used – is available on the Ako Aotearoa website. The theme of the presentation is that if we really want our students to learn about the nature of science, then we need to encourage them to think the way scientists do. This means giving them the opportunity to do experiments (& not the standard ‘recipe’-type experiments so common in undergraduate lab manuals, either), to ask questions, to make mistakes. Anyway, the presentation’s great & I thoroughly recommend having a look at it (hopefully that link will work for you).

But my active thinking about interactive learning goes back rather longer – I think I first really began to consciously focus on it when I was re-developing the labs for our second-year paper on evolution. Teaching evolution the ‘traditional’ way just doesn’t work; it does little or nothing to address strongly-held beliefs & misconceptions, mainly I think because the standard transmission model of giving them ‘the facts’ doesn’t let students engage with the subject in any meaningful way. A couple of papers by Passmore & Stewart (2000, 2002) helped me to focus my thoughts & I believe engendered some significant changes (for the better!) in the way our labs were run.

Last year I came across a paper by Craig Nelson, which presents strategies for actively involving students in class. While he talks primarily about teaching evolution, all the methods he describes would surely result in teaching any science more effectively: engaging students with the subject, helping them to gain critical thinking skills, & in the process confronting their misconceptions & comparing them with scientific conceptions in the discipline. (As part of this he gives a reasonably extensive list of resources and techniques to support all this.) Along the way Nelson refers to a 1998 paper by Richard Hake, who looked at the effectiveness of ‘traditional’ versus ‘interactive’ teaching methods in physics classes.

As the title of Hake’s paper suggests, his findings are based on large numbers of students, in classes on Newtonian mechanics. He begins by noting that previous studies had concluded that ‘traditional passive-student introductory physics courses, even those delivered by the most talented and popular instructors, imparted little conceptual understanding of [the subject].’ Worrying stuff. Hake defines interactive-engagement teaching methods as ‘designed at least in part to promote conceptual understanding through interactive engagement of students in heads-on (always) and hands-on (usually) activities which yield immediate feedback through discussion with peers and/or instructors.’  He surveyed 62 introductory physics classes (over 6000 students), asking the course coordinators to send him pre- & post-test data for their classes, and asked, ‘how much of the total possible improvement in conceptual understanding did the class achieve?’ Interactive-engagement teaching was streets ahead in terms of its learning outcomes for students.
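The quantity behind that question is Hake’s average normalised gain – the fraction of the possible improvement that a class actually achieved. A minimal sketch (the class averages below are invented for illustration, roughly in line with the traditional vs interactive-engagement averages Hake reports):

```python
def normalised_gain(pre: float, post: float) -> float:
    """Fraction of the possible improvement actually achieved.

    pre and post are class-average test scores, as percentages.
    """
    return (post - pre) / (100.0 - pre)

# Invented class averages, for illustration only.
print(normalised_gain(pre=40.0, post=55.0))  # 0.25 - traditional-style course
print(normalised_gain(pre=40.0, post=70.0))  # 0.50 - interactive-engagement course
```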

Nelson argues that such teaching is also far more effective in assisting students in coming to an understanding of the nature of science. The ‘problem’, of course, is that teaching for interactive engagement means that you have to drop some content out of your classes. It just isn’t physically possible to teach all the ‘stuff’ that you might get through in a ‘traditional’ lecture while also spending time on engaging students in the subject & working on the concepts they find difficult (or for which they hold significant misconceptions). In fact, Nelson comments that limiting content is perhaps the most difficult step to take on the journey to becoming a good teacher. He also cites a 1997 study that found that ‘introductory major courses in science were regarded as too content crammed and of limited utility both by students who continued to major in science and by equally talented students who had originally planned to major in science but later changed their minds.’ This is a sobering statement – & perhaps it might be useful in countering the inevitable arguments that you can’t leave things out because this will leave students ill-prepared for their studies in subsequent years… But then, what do we as science educators really want? Students who understand what science is all about, & can apply that understanding to their learning, or students who can (or maybe can’t) regurgitate ‘facts’ on demand for a relatively short period of time but may struggle to see their relevance or importance? I know which one I go for.

Hake, R. (1998) Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics 66(1): 64-74.

Nelson, C. (2008) Teaching evolution (and all of biology) more effectively: strategies for engagement, critical thinking, and confronting misconceptions. Integrative and Comparative Biology 48(2): 213-225.

Passmore, C. & J. Stewart (2000) “A course in evolutionary biology: engaging students in the ‘practice’ of evolution.” National Centre for Improving Student Learning & Achievement in Mathematics and Science Research report #00-1: 1-11.

Passmore, C. & J. Stewart (2002) “A modelling approach to teaching evolutionary biology in high schools.” Journal of Research in Science Teaching 39(3): 185-204.
