Talking Teaching

March 30, 2010

engage them with interactive learning

After my lecture today one of the students said, “I like your lectures, they’re interactive. You make me want to come to class.”

I’m really rapt about this; I’ve worked hard over the last few years to make my lectures more interactive: creating an atmosphere where the students feel comfortable & confident about asking questions; where we can maybe begin a dialogue around the topic du jour; where we can spend a bit of time working around a concept. I guess this reflects my own teaching philosophy: I’ve never felt happy with the ‘standard’ model. (I can hear some of you saying, but what’s that? I guess you could say, the stereotypical, teacher-focused model of lecture delivery.) Way back when I was a trainee secondary teacher, my then-HoD was very big on me talking & the kids writing; we had to agree to disagree… Anyway, as time’s gone on my teaching’s become more & more ‘research-informed’, in the sense that I’ve increasingly delved into the education literature & applied various bits & pieces to what I do in the classroom. Anyway, to cut what could become a very long story a bit shorter, there’s good support for the interactive approach in the literature.

A recent, & prominent, proponent of getting students actively involved in what goes on in the lecture theatre is Nobel laureate Carl Wieman, who gave a couple of seminars at Auckland University & AUT late last year. His talks were titled Science education in the 21st century – using the insights of science to teach/learn science. I wasn’t lucky enough to go there, but the next best thing – the powerpoint presentation he used – is available on the Ako Aotearoa website. The theme of the presentation is that if we really want our students to learn about the nature of science, then we need to encourage them to think the way scientists do. This means giving them the opportunity to do experiments (& not the standard ‘recipe’-type experiments so common in undergraduate lab manuals, either), to ask questions, to make mistakes. Anyway, the presentation’s great & I thoroughly recommend having a look at it (hopefully that link will work for you).

But my active thinking about interactive learning goes back rather longer – I think I first really began to consciously focus on it when I was re-developing the labs for our second-year paper on evolution. Teaching evolution the ‘traditional’ way just doesn’t work; it does little or nothing to address strongly-held beliefs & misconceptions, mainly I think because the standard transmission model of giving them ‘the facts’ doesn’t let students engage with the subject in any meaningful way. A couple of papers by Passmore & Stewart (2000, 2002) helped me to focus my thoughts & I believe engendered some significant changes (for the better!) in the way our labs were run.

Last year I came across a paper by Craig Nelson, which presents strategies for actively involving students in class. While he talks primarily about teaching evolution, all the methods he describes would surely result in teaching any science more effectively: engaging students with the subject, helping them to gain critical thinking skills, & in the process confronting their misconceptions & comparing them with scientific conceptions in the discipline. (As part of this he gives a reasonably extensive list of resources and techniques to support all this.) Along the way Nelson refers to a 1998 paper by Richard Hake, who looked at the effectiveness of ‘traditional’ versus ‘interactive’ teaching methods in physics classes.

As the title of Hake’s paper suggests, his findings are based on large numbers of students, in classes on Newtonian mechanics. He begins by noting that previous studies had concluded that ‘traditional passive-student introductory physics courses, even those delivered by the most talented and popular instructors, imparted little conceptual understanding of [the subject].’ Worrying stuff. Hake defines interactive-engagement teaching methods as ‘designed at least in part to promote conceptual understanding through interactive engagement of students in heads-on (always) and hands-on (usually) activities which yield immediate feedback through discussion with peers and/or instructors.’ He surveyed 62 introductory physics classes (over 6000 students), asking the course coordinators to send him pre- & post-test data for their classes, and then asked, ‘how much of the total possible improvement in conceptual understanding did the class achieve?’ Interactive-engagement teaching was streets ahead in terms of its learning outcomes for students.
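The measure behind that question is Hake’s average normalized gain, <g> = (%post − %pre) / (100 − %pre): the fraction of the maximum possible improvement that a class actually achieved. Here’s a minimal sketch of the arithmetic – the class averages below are made-up illustrations, not Hake’s actual data:

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's average normalized gain <g>: the fraction of the maximum
    possible improvement (from pre-test average to 100%) actually achieved."""
    if pre_pct >= 100:
        raise ValueError("pre-test average must be below 100%")
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Illustrative class averages only (not Hake's numbers): both classes start
# at 40% on the pre-test, but improve by different amounts on the post-test.
traditional = normalized_gain(pre_pct=40.0, post_pct=55.0)
interactive = normalized_gain(pre_pct=40.0, post_pct=70.0)

print(f"traditional <g> = {traditional:.2f}")  # 0.25
print(f"interactive <g> = {interactive:.2f}")  # 0.50
```

The point of normalizing is that a class starting from a high pre-test score has less room to improve, so raw score gains alone would penalise it unfairly.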

Nelson argues that such teaching is also far more effective in assisting students in coming to an understanding of the nature of science. The ‘problem’, of course, is that teaching for interactive engagement means that you have to drop some content out of your classes. It just isn’t physically possible to teach all the ‘stuff’ that you might get through in a ‘traditional’ lecture while also spending time on engaging students in the subject & working on the concepts they find difficult (or for which they hold significant misconceptions). In fact, Nelson comments that limiting content is perhaps the most difficult step to take on the journey to becoming a good teacher. He also cites a 1997 study that found that ‘introductory major courses in science were regarded as too content crammed and of limited utility both by students who continued to major in science and by equally talented students who had originally planned to major in science but later changed their minds.’ This is a sobering statement – & perhaps it might be useful in countering the inevitable arguments that you can’t leave things out because this will leave students ill-prepared for their studies in subsequent years… But then, what do we as science educators really want? Students who understand what science is all about, & can apply that understanding to their learning, or students who can (or maybe can’t) regurgitate ‘facts’ on demand for a relatively short period of time but may struggle to see their relevance or importance? I know which one I go for.

Hake, R. (1998) Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics 66(1): 64-74.

Nelson, C. (2008) Teaching evolution (and all of biology) more effectively: strategies for engagement, critical thinking, and confronting misconceptions. Integrated and Comparative Biology 48(2): 213-225

Passmore, C. & J. Stewart (2000) “A course in evolutionary biology: engaging students in the ‘practice’ of evolution.” National Centre for Improving Student Learning & Achievement in Mathematics and Science Research report #00-1: 1-11.

Passmore, C. & J. Stewart (2002) “A modelling approach to teaching evolutionary biology in high schools.” Journal of Research in Science Teaching 39(3): 185-204.

March 24, 2010

how good are we at teaching the nature of science?

This is a question to which I don’t really have an answer – hopefully it will provoke a bit of a discussion :-)

It arises from an on-line chat I had today with a friend & colleague who’s a secondary teacher. We’re both big on teaching about the nature of science (as you might have gathered, I like the narrative approach to this). And we’ve both followed with interest the comments thread associated with a guest post on SciBlogsNZ. The post was written by Dr Nicky Turner, of the Immunisation Advisory Centre, & it attracted an outpouring of comment centred on Gardasil, the vaccine against human papillomavirus (HPV) that was recently added to the NZ vaccination schedule.

Much of the comment was against this vaccine (& to some degree other vaccines as well). At least some of it was spurred by what the commenters perceived as significant health problems suffered by their daughters as a direct result of the vaccination. (I hasten to say that I have a huge amount of sympathy for these parents & I can understand their need to find something to blame for their daughters’ ill-health.) However, some of the comments showed that those writing didn’t really have a strong grasp of the science underlying vaccines, or of things like the VAERS database & its equivalent in New Zealand.

My friend said, ‘as I read the comments from the ‘anti’ people I kept thinking they just don’t understand about how science works.  The gulf between science and Joe Public  is as huge as ever and I wonder if schools really are doing anything to change that?’

Personally I’m not sure that they are. This may change with the implementation of the new science curriculum – but only if teachers are properly resourced & supported to deliver it. But I may be being overly cynical about this, and I’d be really interested to hear what others have to say on the matter. So please – do feel free to chip in & tell us what you think. It would be great to get a dialogue going around teaching the nature of science in our schools!

Mind games for physicists

Filed under: Uncategorized — Marcus Wilson @ 4:54 pm

This is a copy of a post on my blog. It’s talking about physics, but I suspect that there is a lot of carry-over to other areas of science…

Here’s a gem of a paper from Jonathan Tuminaro and Edward Redish.

The authors have carried out a detailed analysis of the discussions a group of physics students had when solving a particular problem. They’ve worked hard (the researchers, as well as the students) – the first case study they chose was a conversation 45 minutes long.

While tackling the problem, the students ‘played’ several epistemic games – or, put more simply, used different ways of thinking. Six different games are identified – corresponding to six distinctly different ways of thinking about the same problem. Students don’t stick to one game, though; they can flip between several. Very quickly, they are:

1. Mapping meaning to mathematics. This is where the students work out what is going on (or what they think is going on) and put it into a mathematical form (e.g. make an equation) – the equation can then be worked with to reach an answer.

2. Mapping mathematics to meaning.  Kind of the reverse of (1). Here the students start with a mathematical expression they know, and work out what it might mean in practice.

3. Physical Mechanism Game. In this game the students try to make sense of the problem using their own intuition about the physical principles involved.

4. Pictorial Analysis Game. Here drawing a diagram is the major step in representing and organising the problem.

5. Recursive Plug-and-Chug. I’ll quote from the authors here, because they do it so well: “[here the students] plug quantities into physics equations and churn out numeric answers, without conceptually understanding the physical implications of their calculations.”   (The emphasis is mine.)

6. Transliteration to mathematics. Here the students draw from a worked example of another, similar problem, and try to map quantities from problem A onto quantities of problem B.

Now, I ask myself, which methods do I see my students using in my classes, and in the assignments I set? I have to say that in many cases I’m not sure – and probably my teaching is the worse for it. I can say which games I would like to see students using (1 to 4) and which games would make me shudder (5 and 6 – in which the students develop no physical understanding of what is happening), but do I know? There are certainly ways of getting students to use the ‘right’ games, notably setting the right kind of assessment questions.

OK, so which games do I play most in my research? I’d say probably 1, 2 and 3. I do a lot of physical modelling, in which I represent a problem (e.g. how do neurons in the brain behave in a certain environment) through a series of equations (game 1) and then work out the implications of those equations (game 2). I also draw a lot from my intuition about physics (e.g. if you increase the pressure across a pipe, you’ll get more flow, regardless of what shape the pipe is) – that’s game 3.

Finally, those physicists among you might like to know what problem the students had to solve. It was this. Three electrical charges, q1, q2 and q3, are arranged in a line, with equal distances between q1 & q2 and between q2 & q3. Charges q1 and q2 are held fixed. Charge q3 is not fixed in place, but is held in a constant position by the electrostatic forces present. If q2 has the charge Q, what charge does q1 have?

    o    q1                   o   q2                    o  q3

The authors say that most experienced physics teachers can solve this problem in less than a minute. I solved it in about five seconds, using game 3, with a tiny smattering of game 2.  The students concerned (3 of them together) took 45 minutes – this massive difference is perhaps interesting in its own right.
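For anyone who wants to check that five-second answer: q3 sits twice as far from q1 as from q2, so balancing the two Coulomb forces on it gives q1/(2d)² + Q/d² = 0, i.e. q1 = −4Q. A quick numerical sanity check (the coordinates, units, and the test value of Q are all arbitrary choices of mine):

```python
# q1 sits at x=0, q2 at x=d, q3 at x=2d. For q3 to stay put, the net Coulomb
# force on it from q1 and q2 must vanish:
#   k*q1*q3/(2d)^2 + k*Q*q3/d^2 = 0   =>   q1 = -4Q
# (With both source charges to the left of q3, a positive signed term means
# a repulsive force pushing q3 in the +x direction, so the signed sum works.)
k, d, q3 = 1.0, 1.0, 1.0   # arbitrary units; they cancel out of the balance
Q = 2.5                     # any test value for q2's charge
q1 = -4.0 * Q               # the claimed solution

force_from_q1 = k * q1 * q3 / (2 * d) ** 2
force_from_q2 = k * Q * q3 / d ** 2
net = force_from_q1 + force_from_q2

print(f"net force on q3: {net}")   # 0.0 => q3 is in equilibrium
```

Note the answer is independent of d, q3 and k – only the 1:2 ratio of the distances matters, which is what makes the problem solvable by intuition (game 3) in seconds.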

Reference (it’s well worth a read if you teach physics at any level):  Tuminaro, J. and Redish, E. F. (2007) Elements of a cognitive model of physics problem solving: Epistemic games. Physical review special topics – Physics education research (3) 020101.   DOI: 10.1103/PhysRevSTPER.3.020101.

March 19, 2010

Mobile phone physics

Filed under: university — Marcus Wilson @ 3:51 pm

This post is a copy from my (Marcus Wilson’s) blog physicsstop.

Just occasionally, I have a crazy thought regarding a physics demonstration.   This is one that I’m thinking about inflicting on my third year electromagnetism class.  

We’ve been discussing the way electromagnetic waves travel (or rather, do not travel) through electrical conductors. Basically, conductors allow electric currents to flow in response to an applied electric field (in simple terms this just means applying a voltage). Electromagnetic waves such as visible light, radio and X-rays contain electric fields, so when one hits a conductor electric currents flow. Flowing currents heat up a material. Where does this heat energy come from? From the wave. In other words, conductors suck energy out of an electromagnetic wave, and, broadly speaking, the wave can only penetrate so far into the conductor. This distance is what’s known as the ‘skin depth’.

Skin depth depends importantly on two things – the conductivity of the material and the frequency of the wave. The higher the conductivity, or the higher the frequency, the smaller the skin depth.  Thus, if you consider the waves to/from a mobile phone (frequency of around 1000 MHz) travelling through aluminium (a very good conductor) the skin depth turns out to be small indeed – microns in size.  That means wrapping a phone in aluminium foil will prevent it from picking up a signal. I’ve already shown this in class.

But – here’s the crazy thought – what about water? Distilled water is pretty non-conductive, but what comes out of the tap is loaded with dissolved salts and has a moderate conductivity, albeit several orders of magnitude below aluminium’s. What’s its skin depth at mobile-phone frequencies? I’ve done some quick back-of-the-envelope calculations, and I reckon something of the order of a few centimetres. So… I predict that if we put the phone under just a few millimetres of water (YES, it needs waterproofing first!) it will still receive a signal, but suspend it in the middle of a swimming pool and there’s going to be no reception at all.
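Those back-of-the-envelope numbers can be reproduced from the standard good-conductor formula, skin depth δ = √(2/(μ₀σω)). A quick sketch – the conductivity figures are rough assumed values (tap water varies a lot with its dissolved salts), and for water at GHz frequencies the formula is only approximate anyway, since dielectric losses also come into play:

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)

def skin_depth(conductivity_s_per_m: float, frequency_hz: float) -> float:
    """Good-conductor skin depth: delta = sqrt(2 / (mu0 * sigma * omega))."""
    omega = 2 * math.pi * frequency_hz
    return math.sqrt(2.0 / (MU0 * conductivity_s_per_m * omega))

f = 1e9  # ~1000 MHz, roughly mobile-phone frequency

# Rough conductivities (assumed ballpark values):
sigma_aluminium = 3.5e7   # S/m
sigma_tap_water = 0.05    # S/m; varies widely with dissolved salt content

print(f"aluminium: {skin_depth(sigma_aluminium, f) * 1e6:.1f} microns")
print(f"tap water: {skin_depth(sigma_tap_water, f) * 100:.1f} cm")
```

With these inputs the aluminium answer comes out at a few microns and the tap-water answer at several centimetres – consistent with the foil demonstration and the swimming-pool prediction.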

I reckon that getting my class to estimate how much water would be required to shut out the signal, and then design an experiment (one that might or might not need to include ‘borrowing’ the university swimming pool for a short while) would be a great way to get them to think about the various issues themselves. There’s plenty of literature to back up that assertion – e.g. Etkina et al., American Journal of Physics 74(11), p979 (2006). The best thing is that I can’t be tempted to tell them the answer – because I don’t know it; I haven’t done the experiment myself. Though I have found this YouTube video…

March 14, 2010

how do we teach students to question what we say?

I’ve just been reading a post by Tim Kreider, over at Science-Based Medicine. Tim’s talking about the learning experiences of medical students, but a particular phrase caught my eye. I’m reproducing it here because I think it can be applied much more widely: ‘students are in the habit of transcribing and committing to memory everything uttered by the professors who grade them.’

I’ve seen this happen myself. I remember talking with a class about fungi & saying that while most fungi are saprophytes (consuming dead material), some are predatory. And they all (well, all those I could see, anyway) wrote this down unquestioningly. ‘Hang on a minute,’ I said; ‘does this sound likely to you?’ They agreed that no, it didn’t really, it didn’t match with what they already knew about fungi. ‘Well then,’ I said; ‘why didn’t any of you call me on it?’ ‘Because,’ they said, ‘you wouldn’t tell us anything incorrect, would you?’ Which showed a touching faith but also a worrying lack of willingness to question things that didn’t sound right.

(Just as an aside: this was amply demonstrated one year when my class was sitting a lab test. One of the questions asked students to label a section of plant tissue, selecting their labels from a list that I provided. It happened to be April 1st – so I included the word ‘aardvark’ in that list. Rather worryingly, about 30% of the class used it for a label – & when asked why, they said well, it didn’t sound right, but they just knew I wouldn’t have used a word that didn’t belong… And not one of them questioned it at the time.)

Now, in his SBM post, Tim makes the point that med students in their pre-clinical training have to learn so much content that there isn’t a lot of room for rigorous skepticism (but make no mistake, he’s still arguing for the need to teach critical thinking). And I agree, there is factual content that I want my students to be able to remember (& my colleagues teaching at 2nd-year would like it too!) But at some point we must surely also want our students to develop a healthy skepticism: the ability to think critically about what they’re hearing & learning. And I certainly don’t like the idea that my science students might regard me as infallible. Not least because that’s not a particularly good model for what science is like. They need to know that scientists can & do make mistakes, get things wrong, interpret data in ways that subsequently (in the light of further data) turn out to be inaccurate. And they need to feel confident that it’s OK to ask questions. The thing is, how best to get this across?

Speaking for myself, I’m a firm believer in modelling this for my students. If I’m asked a question to which I don’t know the answer, I’ll tell them so, up front. But then I’ll say, but I can hypothesise about this – here’s what the answer might be, & here’s my evidence for thinking this. (If the classroom has web access – & most of ours do these days – we’ll often go on to check what I’ve said on the spot.) If it turns out that I’m wrong (which happens quite a lot), then that’s fine, & we’ve all learned something new.

Plus, I actively encourage questioning during my lectures. (Pop quizzes & concept maps are good for encouraging the sort of conversations that lead to this.) Sure, I mightn’t get through as much content as if I didn’t do this, but the students’ learning experience is surely going to be a better one if they can follow up on things on the spot. And hopefully they learn from this that it really is OK to ask questions :-)

And – I’m all for telling stories. How better to help students learn about the nature of science than to use a narrative approach that lets them see how scientists viewed the world at some past point in time, & how science has led to a change in – or a reaffirmation of – that perspective?

March 12, 2010

an open thread

Filed under: education — alison @ 8:44 am

We thought it might be quite useful to have an open thread where people can come in & start a conversation about something that’s not directly related to a post here. So – here it is, over to you.

March 10, 2010

completions, teaching & learning, & funding

I remember the issue of tying some tertiary funding to student completion & retention rates first coming up about 12 years ago, and various governments have made noises about it ever since. Now I see that this is going ahead, with 5-10% of the funding given to tertiary education providers tied to retention & student performance.

Twelve years back this was being promoted, at least in part, as being a way of rewarding teaching excellence. I had real reservations about that as it seemed to me that using completion rates to measure teaching quality was a very blunt instrument indeed. This time round it’s ‘educational performance’, which I suppose is rather more holistic, but those concerns remain.

Various commentaries in the media have highlighted concerns around the ability to access tertiary education for students in groups that might be seen as more likely to fail: Maori, Pasifika, older students. An institution worried about its figures might simply decline to enrol them in the first place. They’ve also suggested that educators might come under pressure to pass students regardless of whether that pass is deserved. But so far I haven’t seen anything looking in detail at the issue of just why students fail to complete either papers or entire qualifications. This surprises me since I know of at least one fairly major government-funded research study that looked at this very question, surveying students from 7 NZ tertiary institutions  (Zepke et al. 2005).

The students told Zepke & his colleagues that personal problems were a key factor in affecting any decision to withdraw or partially withdraw from their studies: family issues, difficulties balancing work & study (or sport & study), personal health problems. This is something that institutions cannot control and may not have the capacity (or the funding) to deal with. Yes, students also commented that workloads, and ways to manage them, were high on their list of concerns, and there may well be things we (as educators) can do about that. (Certainly the Student Learning Support staff here provide students with a lot of help and advice centred on managing their studies.) And we can improve on the advice that we give about programs of study, given that students reported that feeling they might have chosen the wrong course also influenced their decisions to stick with study or drop out of it. And of course teachers continue to review their practices & reflect on how well what they do affects their students’ learning.

But there are, when it comes to the crunch, 2 sets of players in the teaching & learning world. And I have to say, if my teaching is in part going to be judged on how many students complete my course – then I’d quite like the ability to withdraw those students who enrol in my papers but never ever do anything else. They don’t come to labs, they don’t complete assignments, they don’t sit tests, they don’t come to exams. Every year there are 10-20 such students in a class of 200. They drag down my completion rates. I try to contact them, but rarely get any response. And I’d really rather they weren’t there on my class lists at all …

Nick Zepke, Linda Leach, Tom Prebble, Alison Campbell, David Coltman, Bonnie Dewart, & Maree Gibson (2005) Improving tertiary student outcomes in the first year of study. NZ Council for Education Research report

March 9, 2010

thoughts on assessment

From time to time my colleagues will argue that we ‘over-assess’ our students (bear in mind that this is the uni setting I’m talking about). I’m never quite sure what ‘over-assess’ actually means. The focus is generally on the number of tests & other assessment items that we require students to sit/submit, & the argument is generally that there should be fewer of them.

I guess it depends on the type of assessment, & its purpose…

Now, in my experience – & please feel free to disagree here! – assessment in science classes tends to be ‘summative’: it’s looking for a snapshot of what students ‘know’ at a particular point in time (a test, or a final exam). So the ‘over’ proponents would argue that we can get away with fewer tests & still be able to get a meaningful idea of what students actually know. Part of my objection to that rests on the nature of the assessment items, in terms of what the students are being asked. And another part is related to how we communicate with the students about their answers & their outcomes.

OK, objection part I: what do our tests & exams actually ask of the students? A lot of the time, what I see is questions based on simple recall: label this part of a graph/this structure in a dissection; give the definition of ‘x’ or the meaning of ‘y’; use this formula to calculate ‘z’. Now, I’ll be the first to admit that I use such questions myself, some of the time, and that it may well be desirable to know that students can identify this or calculate that. Where I think we can have a problem is if this is the only/most common form of assessment question. Why? Because it encourages rote learning, & that sort of learning doesn’t have a long shelf-life in the old grey matter, nor does it encourage in-depth understanding of the subject they’re studying. My own feeling is that tests/exams need to have a mix of questions – some that do let you see if students can recall basic facts & concepts (& which incidentally cater for those students who tend to rely on rote-learning) & some that test understanding & the ability to see the big picture. And – students should know in advance what your assessment items look like, because knowing this does actually have an impact on the learning styles that students adopt.

Not quite sure how the ‘less is more’ approach fits here – you could argue for shorter tests, more often, & in fact that’s what our tutor is doing in assessing lab work: 5-minute mastery tests at the start of each week’s class, on the work they covered in the previous week. This should let us pick up students who are struggling, very quickly indeed, & it also gives students rapid feedback on their understanding of the lab content.

The other concern (well, if I think about it I could probably come up with more, but this will do for tonight!) is the sort of feedback that our assessment provides. Is it summative, where students simply get a mark (which lets us grade them – but which can also, if done early enough, be a way of identifying those at risk of failing)? Or is it formative, where they get constructive feedback on how their learning (or their ability to demonstrate it) is progressing?  Which is most helpful, to the learner & to the teacher? Personally I go for a combination of the two, and as time’s gone by I’ve begun using a wider range of formative assessment methods – pop quizzes in lectures, concept mapping in tutorials, alongside the more traditional (& more time-consuming) written comments on essays. The first 2 give both me & the students instant feedback on what they do & don’t understand, which can only be good for their learning & also for my teaching.

Incidentally, in response to the less-is-more approach, a couple of years ago we cut the number of tests in one of the first-year bio papers from 3 to 2. The next semester we went back to 3. Why? Because the tutor & I felt that by the time the first test rolled round, it was almost too late to begin meaningful interventions for some students. (It also meant that if someone missed one test, it didn’t leave me much to go on in calculating an aegrotat.) And because the students themselves, when polled at the end of the semester, indicated an overwhelming preference for the larger number of tests – they felt it was fairer & spread their workload more evenly.

I think I will continue to reject ‘less is more’ for the time being.

March 5, 2010

the sound of my own voice

Filed under: science teaching — alison @ 2:49 pm

Every now & then I take a timid step into the unknown. ‘Unknown’ for me, that is. Today, the step was to record my lectures using ‘coursecast’ (Panopto) software, so that my students can view them again (& again & again… heck, I hope not – if they need several viewings then my explanations etc probably aren’t up to scratch!)

I must say, I was quite pleased with the result. Students viewing a recording get to see a smallish ‘live action’ screen that shows me plus the backs of a few rows of heads; the current slide plus thumbnails of the others, which allow them to jump between slides; & if enabled, a view of the computer screen as well. The quality of the recording is good – but I must say, it’s quite strange listening to your own voice! It sounds different when it’s actually echoing around in your own head.

I didn’t use the ‘screen capture’ option & now I think I should have done, & will in future. This is because I routinely ‘draw’ on my powerpoint slides – I always use the arrow (cursor) as a pointer, because it’s much easier on the eye than a jiggly laser pointer spot, but you also have the option of using it as a pen. This lets me cross things out (if I’ve made a typo, for example!), underline for emphasis, & scribble a diagram by way of explanation. I find it really helps the classroom dynamic as well – it lets the students see I don’t mind who knows that I’ve made a mistake, plus it can inject a bit of humour. Anyway, those scribbles don’t show on the slide capture function, so I’ll enable ‘screen capture’ from now on.

Now, here’s the philosophical musings… It could be argued – & I suspect many of my colleagues will do this – that if my lectures are recorded & available after the event, the students won’t bother coming at all. (They said this when I started putting all my ppts on moodle ahead of lectures, & I didn’t notice any obvious drop in numbers!) Personally, I don’t think it’s all that likely – there’s a lot of interplay in my lectures that the recording won’t pick up, because it’s student-based & I’m the only one wired for sound. And there are a lot of benefits to be had from doing this sort of thing. Students who are ill won’t have to rely on study guides or their friends’ notes but can still see the performance. And students who didn’t catch a comment, or who need to hear something again, can replay it. Similarly, if they didn’t understand the first time, they’ve got the opportunity to hear things again. It’s got to be good for students.

Good for me, too: I get to see (in miniature) & hear what the students see & hear, so if I’ve got any irritating mannerisms etc then I can identify them & – I hope! – work to correct things.

So hopefully it’s a win-win for everyone :-)

March 3, 2010

breadth vs depth

Filed under: science teaching — alison @ 9:49 pm

This is another re-post from the Bioblog. It was written to draw attention to an issue that I feel rather strongly about; that is, the conflict between teaching a few key concepts in depth, encouraging student understanding, and teaching a whole heap of material in the same period of time, which often sees students resorting to rote learning as a coping mechanism. I’d be interested to hear what you think of this one.

One of the conflicts faced by probably every classroom teacher is the one between the amount of material one has to teach (& the students to learn about) and the time available. I face it myself: huge (though also very good) textbook, requests from my colleagues to make sure that the first-year course adequately prepares students to take second-year papers, students coming in with a range of backgrounds & prior experiences of biology – & a 12-week semester in which to accommodate it all. Reflecting on my teaching practice over the last several years in our A semester intro bio paper, I think I probably teach less content, less detail, than when I started in this particular paper, but have more of a focus on identifying (& dealing in depth with) big, or key, ideas. As you’ve probably guessed from my posts, I encourage my students to think critically about what they’re learning, and to gain an understanding of how those ideas & concepts relate to each other. And of course I’d like all my students to view science as fascinating, fun, useful, & relevant to them in their daily lives…

So of course I was interested in a paper by Marc Schwartz & his colleagues, entitled Depth versus breadth: how content coverage in high school science courses relates to later success in college science coursework. How would their findings relate to my own teaching approach? (And, is what I do in the classroom supported by empirical data, or is it a case of intuition & experience leading me up the garden path?) In a survey of 8310 students taking first-year biology, chemistry, & physics courses, the authors found that students who said they’d spent at least a month studying at least one major topic in depth, while at high school, received higher grades in their university science classes than students who hadn’t done (or didn’t remember doing!) any in-depth work. Interesting! The team also looked at the outcomes for students who reported having broad high school classes that covered something on all major topics. The results here were equally interesting – these students didn’t seem to have any advantage over students who hadn’t ‘studied everything’ in physics & chemistry, & were at ‘a significant disadvantage in biology’.

Presumably students spending a month or so on a single topic can really come to a good understanding of the area, mastering key concepts & coming to understand how it all fits together. Taking a ‘deep learning’ approach, in other words. In classrooms where there’s a drive to cover everything, it could well be that many students cope with the huge volume of material by using learning approaches that could be called ‘shallow’ – rote learning techniques, for example, that don’t really aid a thorough understanding. (All this, of course, assumes that the tertiary assessment practices these students are encountering reward those taking the ‘deep’ learning approach to their studies…) And those with the learning skills developed by taking a deep learning approach to one topic can then apply those to the new material they learn in the following year, enhancing their learning outcomes there as well.

I guess my fondness for trying to focus on teaching methods that encourage ‘deep’ learning reflects my own philosophy that there is simply too much information potentially available. In ‘the old days’ it was probably quite possible to teach a subject such as any one of the sciences in fairly comprehensive breadth. But since then, particularly with the advent of modern technology, there’s been something of an explosion of knowledge. I know some of my students are quite daunted by the sheer size (& volume of content) of our textbook (the excellent Campbell (no relation!) & Reece). For me, & my colleagues in first-year biology, the question is, how to include it all? And, should we cover it all? Schwartz et al quote another author as saying that ‘[to] be successful [in their learning], students need carefully structured experiences, scaffolded support from teachers, and opportunities for sustained engagement with the same set of ideas over extended periods of time.’ That ‘sustained engagement’ part is the tricky one, when you’re teaching a ‘service’ course that’s intended to prepare students for a range of paper options in their next year of study. I try to manage it by identifying common themes (eg the need for gas exchange, internal transport, energy) that apply across the living world, & tying things to those, so the themes recur even if the material attached to them is novel. But it’s a testing balancing act, nonetheless… Nice to know that at least one research paper suggests that I’m on the right track :-)

M.S. Schwartz, P.M. Sadler, G. Sonnert & R.H. Tai (2009) Depth versus breadth: how content coverage in high-school science courses relates to later success in college science coursework. Science Education 93: 798-826. doi 10.1002/sce.20328
