Yesterday we had another meeting of the uni’s Teaching Network – an informal gathering of staff from across the campus to talk about some teaching-related issue that interests us. (Lunch is provided – an added extra!) The topic this time was ‘the first-year experience’, beginning with a discussion of the NCEA. The ‘National Certificate of Educational Achievement’ is the qualification gained by most state (public) school students in New Zealand, although some may opt for the Cambridge International Exams or the International Baccalaureate instead. It was a good choice of topic, as many (most?) university lecturers are pretty much unaware of the intricacies of the NCEA, & most people there probably went away with a better understanding of the whole system.
Then the discussion got really interesting. How much attention should we pay to the NCEA – i.e. to the systems of assessment used in schools, & also to the school curriculum? How helpful is it for our students if we use those systems (or something similar)? Can we change the way we teach & assess our students, given that there’s likely to be more experiential learning – & assessment of that learning – going on in your average secondary school classroom than you’d see in many first-year lecture rooms & laboratories? And should we? Shouldn’t students simply ‘harden up’ & get used to the fact that universities do things differently? Are we doing our students a disservice if we change what we do to accommodate their different prior experiences? Would this constitute ‘dumbing down’ the university experience? Can we improve our use of learning objectives? And how good are we at teaching our students the skills & attributes of critical thinking?
I can’t pretend that I know all the answers to those questions, but I’ll certainly do my best to give you my perspective!
On the issue of curriculum awareness, I would argue that all first-year university lecturers (at the very least) should be aware of the curriculum that students taking their subjects would have followed in their senior secondary school years. How else can we be sure that we’re linking in with students’ prior learning; that we’re not hitting them with ideas they’ve never heard of (& so have nothing to build that new information onto), or with material that has already been fairly thoroughly covered at school? Neither of those possibilities strikes me as a good thing, since there’s the potential for students to switch off from what’s going on in our classrooms – either they feel that they’ve heard it all before & get bored (& maybe don’t recognise how what we’re doing builds on that earlier knowledge), or they don’t understand & give up.
I’ve already written a bit on the nature of assessment; here, for example. Personally I believe that universities can learn a bit from looking at assessment systems like the one used in NCEA – not to say that it’s all sweetness & light, but it is definitely a more transparent system than is typical in many university papers. And I like the fact that students are scaffolded into the question & that examiners must think carefully about exactly what they’re testing (& I speak from experience here). For example, if the intention is to examine students’ ability to interpret information, then in NCEA you give them that information, rather than expect them to regurgitate the facts before interpreting them. I’m not arguing that we shouldn’t expect students to learn key facts & concepts! Only that perhaps we need to think more carefully about what & how we’re assessing.
At yesterday’s discussion someone raised the point that we don’t have funding to change what we do; that in a financially-tight environment exams & multichoice tests are the best way to handle the demands placed on the lecturer by the need to assess learning – in large classes in particular – & that change would require money we don’t have. I’m not sure that I agree with that. Some of the changes we’re instituting in first-year biology – peer-marking of essay drafts, for example – certainly take time to set up, & you need to educate the students about what you’re doing & why. And of course, the exercise itself doesn’t put any actual marks in the mark sheet. But once these changes are in place then – hopefully! – students’ understanding of what’s actually expected of them in tasks such as writing an essay should improve, & from my point of view the end-product should be of higher quality & perhaps even require less actual time from me in marking & giving feedback :-) So I would argue that it is possible to change the way we teach & assess, even in large university classes, in a way that links with students’ earlier experiences of these processes & enhances their enculturation into the academic world of the university.
The counter-argument to this is that we would simply be perpetuating the ‘spoon-feeding’ that may have gone on in schools. Should students just ‘get on with it’, harden up, & accept that universities do things differently? (Secondary teachers: please don’t shoot me! I don’t actually agree with that; I’m being rhetorical here :-) Well, not entirely rhetorical – that argument really was raised yesterday.) It would perhaps hold – if universities could demonstrate that their way of ‘doing things’ produces better outcomes, in terms of student learning, than the alternatives I’m discussing. But I’m not 100% convinced that we can. For example, if you look at your average ‘graduate profile’ – the statement of what graduates are capable of – you’ll find it says things like ‘will demonstrate critical thinking skills, be an independent learner, be capable of communicating in a range of different modalities’, & so on… But – how do we know this? Do we give students the opportunity to learn & practise these skills, & how do we know that they possess them at the end? Traditional teacher-down, lecture- & ‘cookbook’ lab-based teaching isn’t going to deliver on these things.
The other thing here, of course, is that the nature of the student body has changed over time. When I was a student, mumblety-years ago, only a rather small proportion of senior school students went on to university. We were taught by that traditional method; by & large we survived it, & (mostly) we did well. It worked for us, so why won’t it work for ‘them’… But this ignores the fact that – whether we like it or not – incoming students are now far more diverse, with a wider range of learning experiences, language abilities, cultural backgrounds, & so on. Universities simply have to take this into consideration in their practices. And on a purely pragmatic basis, having those who can’t cope (read: haven’t ‘hardened up’) simply drop out won’t serve us well when a proportion of our funding rests on completions & retention rates…
And no, taking all the above into consideration does not equate to dumbing down our curriculum! Doing things differently doesn’t mean doing them less well & may in fact improve the effectiveness of our teaching.
Learning objectives… We all have them. Often they’re implicit, unstated; when you go into a lecture you’ll have at the back of your mind the things you hope your students will learn from it. But all the course outlines I’m familiar with have learning objectives/outcomes written in there somewhere. So what’s not to like?
The trouble is that they’re often phrased in such a way that students aren’t actually much wiser about what they’re expected to be able to do. ‘Students will understand/demonstrate an understanding of…’ is a common one, & it raises the question: how will they (or the teacher) know that they understand? Will they be able to use a piece of equipment in the right way & in the right context? Or show their understanding by explaining a concept or correctly interpreting a particular data set? If the outcome is ‘have the attributes of a critical thinker’, then again, how will both parties know that this has been achieved?
So spelling them out clearly has benefits for both parties, & it helps those of us on the lecturing side of the partnership to focus on what we actually intend our students to gain from our teaching. And from our assessment – assessment drives student learning much more strongly than teaching does. So it’s no use having clearly stated learning objectives if we then go on to ignore them in setting assessment items. If the former expect higher-order skills & the latter test mostly lower-order attributes, then which do you think the students will focus on in their revision & exam preparation? (I have to say, having good clear learning objectives makes it much easier to develop good assessment tools!)
And critical thinking? I think I will save that for a whole ‘nother post :-)