Talking Teaching

December 12, 2013

Evaluating teaching the hard-nosed numbers way

[This is a copy of a post on my blog PhysicsStop, sci.waikato.ac.nz/physicsstop, 10 December 2013]

Recently there’s been a bit of discussion in our Faculty on how to get a reliable evaluation of people’s teaching. The traditional approach is the appraisal. At the end of each paper the students answer various questions on the teacher’s performance on a five-point Likert scale (‘Always’, ‘Usually’, ‘Sometimes’, ‘Seldom’, ‘Never’). For example: “The teacher made it clear what they expected of me.” The response ‘Always’ is given a score of 1, ‘Usually’ a score of 2, down to ‘Never’ with a score of 5. The responses to the questions, averaged across students, give some measure of teaching success – ranging in theory from 1.0 (perfect) through to 5.0 (which we really, really don’t want to see happening).
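As a concrete illustration of the arithmetic, here is a minimal sketch in Python – the responses below are invented for the example, not real appraisal data:

    # Map Likert responses to scores; average across all students and questions.
    SCORES = {'Always': 1, 'Usually': 2, 'Sometimes': 3, 'Seldom': 4, 'Never': 5}

    # Hypothetical responses: one list per student, one entry per question.
    responses = [
        ['Always', 'Usually', 'Always'],
        ['Usually', 'Sometimes', 'Usually'],
    ]

    all_scores = [SCORES[answer] for student in responses for answer in student]
    average = sum(all_scores) / len(all_scores)
    print(f"Averaged appraisal score: {average:.2f}")  # 1.0 perfect ... 5.0 dire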

We’ve also got a general question – “Overall, this teacher was effective”. This is also given a score on the same scale.

A question that’s been raised is: Does the “Overall, this teacher was effective” score correlate well with the average of the others?
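For anyone wanting to run the same check on their own appraisal data, here is a minimal sketch of the calculation (the numbers are invented for illustration, not my actual scores):

    import numpy as np

    # Hypothetical per-paper scores on the 1-5 scale.
    overall = np.array([1.0, 1.2, 1.5, 1.1, 2.1, 1.4])      # 'Overall, this teacher was effective'
    others_mean = np.array([1.0, 1.3, 1.4, 1.2, 2.2, 1.5])  # mean of the remaining questions

    # Pearson correlation coefficient between the two measures.
    r = np.corrcoef(overall, others_mean)[0, 1]
    print(f"Correlation coefficient: r = {r:.3f}")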

I’ve been teaching for several years now, and have a whole heap of data to draw from. So I’ve been analyzing it (for 2008 onwards), and, in the interests of transparency, I’m happy for people to see it. For me, the answer to “does a single ‘overall’ question get a similar mark to the averaged response of the other questions?” is a clear yes. The graph below shows the two scores plotted against each other, for different papers that I have taught. For some papers I’ve had a perfect score – 1.0 from every student for every question. For a couple, scores have been dismal (above 2 on average):

[Figure: the ‘overall’ question score plotted against the average of the other questions, by paper]

What does this mean? That’s a good question. Maybe it’s simply that a single question is as good as a multitude of questions if all we are going to do is to take the average of something. More interesting is to look at each question in turn. The questions start with “the teacher…” and then carry on as in the chart below, which shows the responses I’ve had averaged over papers and years.

[Figure: average response to each appraisal question, over papers and years]

Remember, low scores are good. And what does this tell me? Probably not much that I don’t already know. For example, anecdotally at any rate, the question “The teacher gave me helpful feedback” is a question for which many lecturers get their poorest scores (highest numbers). This may well be because students don’t realize they are getting feedback. I have colleagues who, when they give oral feedback, will prefix what they say with “I am now giving you feedback on how you have done” so that it’s recognized for what it is.
So, another question: how much have I improved in recent years? Surely I am a better teacher now than I was in 2008. I really believe that I am. So my scores should be heading towards 1. Well, um, maybe not. Here they are. There are two lines – the blue line is the response to the question ‘Overall, this teacher was effective’, averaged over all the papers I took in a given year; the red line is the average of the other questions, averaged over all the papers. The red line closely tracks the blue – this shows the same effect as seen in the first graph. The two correlate well.
[Figure: yearly averages of the ‘overall’ score (blue line) and the mean of the other questions (red line)]
So what’s happening? I did something well around 2010 but since then it’s gone backwards (with a bit of a gain this year – though not all of this year’s data has been returned to me yet). There are a couple of comments to make. In 2010 I started on a Post Graduate Certificate of Tertiary Teaching, and I put a lot of effort into it. A couple of the major tasks were targeted at implementing and assessing a teaching intervention to improve student performance. I finished the PGCert in 2011. That seems to have helped with my scores, in 2010 at least. A quick perusal of my CV, however, will tell you that this came at the expense of research outputs. Not a lot of research was going on in my office or lab during that time. And what happened in 2012? I had a period of study leave (hooray for research outputs!) followed immediately by a period of parental leave. Unfortunately, I had the same amount of teaching to do, and it got squashed into the rest of the year. Same amount of material, less time to do it, poorer student opinions. It seems a logical explanation, anyway.
Does all this say anything about whether I am an effective teacher? Can one use a single number to describe it? These are questions that are being considered. Does my data help anyone to answer these questions? You decide.

September 12, 2013

Who’s the best teacher?

Filed under: Uncategorized — Marcus Wilson @ 9:47 am

[This post is a copy of one I posted yesterday on my blog PhysicsStop. http://sci.waikato.ac.nz/physicsstop   ]

I’ve just come out of a very interesting cross-faculty discussion on the effective use of ‘tutors’ in our courses. It’s hard to define the word, because the role of ‘tutor’ means different things in different parts of the university. But think of it broadly as someone who is paid (often not very much, and on a casual contract) to teach in laboratory classes, give tutorial sessions to students, mark student work, undertake administrative teaching tasks (e.g. attendance registers for laboratory classes) and so forth. Tutors are often the primary contact that students have with teaching staff at the university – students probably feel able to talk to their tutors more freely than they can talk to other academic staff – though that is quite faculty- and subject-specific.

Their role within the university system is very valuable. Their close contact with students ensures that students feel they belong and have somewhere to go with problems. But the ‘soft’ stuff isn’t the only reason for using tutors – take a look at the research paper referenced below on the teaching effectiveness of tenure-track and non-tenure-track (adjunct) staff. The work looks at teaching at Northwestern University in the US across eight years (it’s a sizeable study, covering 15,000 students). In particular, the study looked beyond a comparison of the teaching effectiveness of the two groups of staff in the courses where both groups taught, and examined the enrolment and performance of students in subsequent courses. What it found was that students taught by adjuncts (what we might loosely call a ‘tutor’ here) got better grades in subsequent courses, and were more likely to enrol in subsequent courses in that subject. In other words, the adjuncts were more effective in terms of both long-term student learning and student motivation. The effect was most marked with the weakest students.

The work doesn’t look at why this is the case, though it offers some speculative reasons, including that the tenured staff are recruited for being leaders in their research disciplines, not for being excellent teachers.

This article should make all universities with a two-tier teaching staff system (such as Waikato) sit up and take notice. Just what strategies are we using when it comes to ensuring excellent teaching? Should universities split staff into ‘teaching only’ and ‘research only’ roles? Are tutors being paid according to the value they deliver? And, importantly for the students who fork out large amounts of money to go to university – are the students getting value for money from their teachers?

David N. Figlio, Morton O. Schapiro & Kevin B. Soter (2013). Are tenure track professors better teachers? Working Paper 19406, National Bureau of Economic Research. http://www.nber.org/papers/w19406

February 28, 2011

If it ain’t broke, don’t fix it?

Filed under: university — Marcus Wilson @ 1:02 pm

This is a copy of a recent post on my blog PhysicsStop, http://sci.waikato.ac.nz/physicsstop

With the teaching semester almost upon us, here’s a thought for you university lecturers out there.

I’ve been at a teaching workshop this afternoon [24 Feb 2011], where we’ve been discussing how teaching and research can link together – i.e. that they are not two completely separate activities, as we often think. There were a number of presenters (I was one – I felt really flattered, and somewhat of a fraud, talking about this subject) and one point that came up was how different people and departments approach the idea of ‘experimenting’ on students.

You can’t improve the way you teach (or the way you do anything) if you’re not prepared to change something and see how it goes. “You can teach for twenty years, or you can teach for one year twenty times”, as one saying goes. But some in the group described how their departments can be very reluctant to let lecturers change anything. “But what if it doesn’t work?” – they say – “then the students will be worse off. You’ll have harmed their education. If it ain’t broke, why are you thinking about fixing it?”

That’s a fair response if the course and the way you teach it genuinely ‘ain’t broke’. But how do you know that it ‘ain’t broke’? I suspect that many people who wheel that line out don’t actually know how effective the teaching in question is. Until you do (and a good score on course appraisals does NOT equal good teaching) you really can’t make any informed choice about whether to leave something alone or to change it. Remember, not changing anything is a choice of action in just the same way that changing something is a choice. To those who think it’s unethical to use your students as guinea pigs by trying a different teaching strategy, I would ask whether it is ethical to deny your students better teaching when it could be available to them. I mean, what would you think if Graham Henry refused to try out any new players because they might not be as good in an All Black jersey as the current ones?

Unless you are prepared to make the effort to find out how good your teaching really is, and to try out schemes that could improve the areas where improvement is required, you will be no better a teacher this year than you were the year before, or the year before that, or the one before that… And it will be your students who suffer most.

December 13, 2010

Experimenting with Experimenting

Filed under: education, science teaching — Marcus Wilson @ 1:35 pm

This is a copy of a post on my blog PhysicsStop  http://sci.waikato.ac.nz/physicsstop

Last week I was at the Australian Institute of Physics congress, in Melbourne.

One of my talks concerned a piece of work I’d done with my second year experimental physics class this year. Before going to Melbourne, I gave the talk a trial run at the University of Waikato’s ‘celebrating teaching’ day. It provoked a few comments then, and a few more in Melbourne, so I thought I’d give a summary of it here.

I’ve been teaching experimental physics more or less for the whole time I’ve been at the university (my divine punishment for navigating my own undergraduate studies on the basis of finding the path with the least amount of practical work in it). I’ve noticed that few students do any planning before the lab. Some will turn up at the lab without even knowing what experiment they will be trying to do. So this year I’ve tried to turn this around.

The great thing about the theory of tertiary education is that when there is a problem, the solution is often simple: pay attention to what you are assessing. “If you want to change student learning … change the assessment” (G. Brown, J. Bull and M. Pendlebury, Assessing Student Learning in Higher Education, Routledge, London and New York, 1997). The issue, I think, was that I was never actually getting the students to plan anything. They learn that they can get good marks without doing any preparation beforehand, because the instructions for the lab are pretty much provided in full.

So this year I’ve forced them to prepare for a couple of experiments, by removing the instructions. Instead, I gave them the task they had to do, and  let them get on with working out how it should be done, using what equipment, etc. Since we use some moderately complicated lab equipment, I chose to ‘pair-up’ experiments – one week to introduce them to the equipment, the next to give them an experiment to do (without instructions) that used that equipment. That way, learning to drive the equipment did not become a distraction.

For the most part (around three quarters of them), students overcame initial hesitations (horror?) and tackled this very well. Most enjoyed it, and thought the approach was beneficial. However, the other quarter really didn’t like it. I know this from appraisal forms, a focus group, and casual conversations with the students in the lab.

I gave my talk and there was a fair bit of discussion afterwards. The audience (mostly secondary teachers and tertiary teachers with a strong interest in education) thought that the way these experiments were assessed needed very careful thought to get the most out of the students. Was I assessing the ‘planning’ task itself (and how?), the end results of the planning, or something else? I thought I was assessing ‘planning’, as well as how well the student carried out and documented the experiment after the planning, but possibly it was not transparent enough to some of the students. That’s worth working on for next year.

Also, was I concerned that students might get their experiment ‘planned’ by someone else? E.g. consult another student in the group that had done this experiment in a previous week. Personally, this doesn’t bother me – in fact, I would encourage such consultation as it shows students are taking the task seriously. If a student finds it easier to learn from other students rather than from me, I have no problem with that. If the end result is that he or she learns (and I mean ‘learn’ not ‘parrot’) what I wish them to learn (which is more than just facts) then I have no problem with whatever route they take.

I was encouraged by a final comment by a lecturer who had done a similar thing with a large first-year class (in contrast to my small second-year class) and found very similar results – generally successful and well-liked by students, but with a significant minority that had strong views the other way.

November 17, 2010

Learning Outcomes

Filed under: university — Marcus Wilson @ 4:31 pm

This is a copy of a post on my blog PhysicsStop.

This week I’ve had three fairly lively discussions about learning outcomes in our university papers.  (It’s well blogged already – e.g. here, but I’ll add some things to the mix). The concept is hardly new, but it is only just being given a really wide profile here at Waikato. Although many individual teachers, and many departments, have routinely written learning outcomes for their papers up to this point, it is now becoming mandatory. This is causing a bit of anxiety.

I honestly think that most of the adverse reaction is because it is seen as being another piece of administration work to do that has nothing to do with the task of actually teaching. In fact, it has everything to do with the task of teaching. Simply put, if you don’t know what the learning outcomes for your paper are, your teaching really has no purpose. 

So, for you non-teachers out there, what am I talking about? A learning outcome for a course is a statement of the learning we want students to take from the course, expressed in such a way that it tells us how we (and the student) would know whether the student has achieved that learning. Biggs and Tang put it as “…a statement of how we would recognize if or how well students have learned what is intended they should learn.” (Biggs, J. & Tang, C. (2007). Teaching for Quality Learning at University. Maidenhead, U.K.: Open University Press.)

This excludes words like ‘understand’ and ‘know’; they are too vague – my idea of understanding could be very different from a student’s. Instead, how would my students be able to demonstrate that understanding?

So, an example. I would like my students to understand ‘skin depth’. That’s not a good learning outcome – it doesn’t indicate how that understanding could be demonstrated – so I ask myself how my students could demonstrate that they understand.  They could do that by calculating a value for skin depth for a particular situation and then interpreting what this means with regard to how far an electromagnetic wave will penetrate into a material.   So I could write “A student (who successfully completes this paper) will be able to calculate a value for skin depth in different electromagnetic scenarios and discuss critically the significance of their result in terms of the penetration of electromagnetic radiation.” 
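For concreteness – and this is the standard textbook result, not anything specific to our paper – the calculation that outcome asks for rests on the good-conductor skin depth formula:

\[
  \delta = \sqrt{\frac{2}{\mu \sigma \omega}},
\]

where \(\sigma\) is the material’s conductivity, \(\mu\) its permeability, and \(\omega = 2\pi f\) the angular frequency of the wave; the field amplitude decays as \(e^{-z/\delta}\) with depth \(z\). A student who can evaluate \(\delta\) for a given material and frequency, and then say what the number implies about penetration, has demonstrated the outcome.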

There is content in the learning outcome, but it isn’t a list of content. (That would be something like … “a student will study … skin depth and penetration of electromagnetic radiation into matter.”) Learning outcomes are suggestive of assessment tasks – they have to be, since learning needs to be demonstrated. So, for my example, I could assess my students’ ability to meet the learning outcome by having them do a calculation of skin depth, and asking them to comment on what their result implies.

Note that the assessment follows from the learning outcomes, not the other way around. That is, we don’t take the same old assessment that we’ve been using for the last ten years and just write some outcomes based on it. We write assessments based on what we want the students to learn (we all know students learn that which gets them through the assessments).   I used to moan that my students didn’t want to learn physics, they just wanted to pass the exam. The solution was obvious (so obvious that it only took me a few years to grasp…) – if the exam assesses the physics I want students to learn, they will learn it. And probably enjoy it a whole lot more as well.

August 12, 2010

Some thoughts on assessment

Filed under: Uncategorized — Marcus Wilson @ 10:16 am

This is a copy of a post I put last week on my home blog PhysicsStop  http://sci.waikato.ac.nz/physicsstop

I went to a very interesting seminar this morning [5 August]. Phil Race, from the UK, was presenting on making assessment better in tertiary teaching. There was a lot in his talk (you can download it and other information from www.phil-race.co.uk) – I’ll just summarise some of the points that are most interesting to me.

1. Assessment started going downhill when, in 1791, the University of Cambridge introduced the first written exam. (Before that, it was purely oral).  Not sure that this is ever likely to change – but I can certainly say that in my experience students seem to appreciate feedback a lot more when it is given in person.

2. Don’t put a mark or grade on a student’s assignment when you return it to them. The student will become focused on the grade, to the point of ignoring all your written feedback.

3. Instead, let them work out what their grade should be, based on the feedback you give and how their work compares to that of their peers. I tried this out very briefly this afternoon in a lab class. I normally mark student lab reports by spending a few minutes the following week with the student and going through their report together (see point 1). Today I asked my poor unsuspecting students what mark they reckoned they should get.   All but one was spot-on – their assessment was the same as mine. The other one was harsh on himself – I thought his work was of better quality than he did, and I was able to explain why.

4. Never ask a student ‘Do you understand?’ This is likely to trigger the following train of thought:

What is it he wants me to understand? What if I don’t understand it? Will he think I’m stupid? Will my friends think I’m stupid? Will he ask me more awkward questions? How much do I have to understand? Is it a hint that this will be in the exam? etc. etc.

So the student answers: “Hmmm… I’m not sure…” – which gets no-one anywhere.

And 5. There is so much literature about what works and doesn’t work with assessment that there shouldn’t be any excuse for carrying on with the same methods that we know aren’t much good. Just go and do what works.   As the Oracle of Delphi is supposed to have said “You know what the problem is… you know what the solution is…. now go and do it”

July 21, 2010

Aaaarrhh First Year

Filed under: Uncategorized — Marcus Wilson @ 3:53 pm

This is a copy of a post made today on my home blog PhysicsStop http://sci.waikato.ac.nz/physicsstop

It’s no secret that I don’t like teaching first year classes.  I find third year undergraduates far easier to teach. I think the main reason for this is that with the third years I don’t have such a large gap between my knowledge of the subject and theirs. That means that I don’t need to think so much about whether I am using words they are not familiar with, or whether my explanation draws on contexts and phenomena that the class hasn’t seen before. I know others take the opposite view – third year classes are harder because the material is more advanced – but to me that’s not a problem. What is a problem is communicating, and it is easier for me to do so with students who are closer to my ways of thinking.  Plus third years tend to speak a lot more and let you know when they don’t follow something, so it is less easy to lose a whole class without knowing it.

On Monday I did a first year tutorial in which I ended up in a horrible tangle trying to explain something that to me is really simple. To be fair on myself, I think the question I had to explain (which came from a website) was badly put together, but I should have done rather better than I did. First year teaching takes real practice (I think it does, anyway). I’m very envious of people like Alison Campbell who excel in teaching large groups of first years.

As part of my PGCert in Tertiary Teaching, I experimented last semester with a method of finding out whether my class (a second year one in this case) is with me or not. (See, for example, Turpen and Finkelstein, Physical Review Special Topics – Physics Education Research 5, 020101 (2009).) It’s a well-used method in physics teaching, though I adapted it a little for my class. Essentially it’s formative assessment – ask the class multiple choice questions at the beginning of the lecture relating to the last lecture’s material, and have the class discuss them in pairs – not to test them for the sake of allocating marks, but for me to know where their understanding is at. It worked well, I think – there were questions that the class struggled with that I thought they’d have grasped easily. That has got to be good overall for the students, because it allows me to go and unpick their reasoning and correct misconceptions. In a subject like physics, where so often one concept is built on another, the teacher (me) needs to know whether the students have that foundation or not – if not, there really is no point going on.

That’s another reason why I find third years easier to teach – by the time they reach third year, they have grasped those underlying concepts (if not, they’d be failing big-time in second year). That means less preparation on my part is required. Maybe I’m just lazy.

June 10, 2010

What equation do I need?

Filed under: Uncategorized — Marcus Wilson @ 3:19 pm

This is a copy of a recent post to my blog http://sci.waikato.ac.nz/physicsstop

With A-semester exams looming, the students here at Waikato are becoming a little more focused on their work. That inevitably means that I get more of them coming to me after a lecture, or knocking on the door of my office. And that is good.

One of the most common questions I get, usually in relation to an assignment, or a past exam paper, is ‘What equation do I need to solve this?’. I have slowly come to the conclusion (by slow, I mean six years) that when a student says this he actually means the following:

1. I don’t understand this

2. But I don’t mind that I don’t understand, I just need to know what to do to answer the question (and pass the assignment, exam etc.)

It’s the second one that is interesting. Anyone can put numbers into an equation and come up with an answer, but doing so doesn’t necessarily add to their understanding. Unfortunately, it can add to their ability to pass examinations, which is what drives students. And giving students that understanding is part of what teaching a Bachelor of Science degree is about. Without it, a student cannot hope to apply their learning to new situations. Remember, that is what real scientists (e.g. physicists) do. No-one gets a science job that involves putting numbers into well-established formulae. For example, our graduate profile for a BSc degree says a BSc graduate should have

 “Skills, knowledge and attributes needed to contribute directly and constructively to specific aspects of the building of a science based knowledge economy in New Zealand”
 
That is what I need to be building in my students – the ability to do just this. It is scientists who will drive the economy forward and solve the world’s major problems. Will our BSc graduates be able to embark down this path? Sure, a lot of science learning occurs after a BSc, but a BSc shows that someone is reasonably competent in their use of science, enough to contribute positively. How can you contribute positively if you don’t care that you don’t understand something? (Point 2 above.)
 
If we produce BSc graduates who are skilled in putting numbers into formulae and nothing else, we are devaluing the BSc, denying the country good scientists (and therefore harming the economy), and short-changing the taxpayer, who provides the majority of the money for universities to educate students. So when I get asked ‘What equation do I need?’ I need to stop and think: what does the student really want, and is it in his best interests (and the country’s) to give him that?

N.B. I could also say the point is that we, the teachers, need to set decent assignments – ones where stuffing numbers into formulae isn’t sufficient to pass.

March 24, 2010

Mind games for physicists

Filed under: Uncategorized — Marcus Wilson @ 4:54 pm

This is a copy of a post on my blog http://sci.waikato.ac.nz/physicsstop.  It’s talking about physics, but I suspect that there is a lot of carry over to other areas of science….

Here’s a gem of a paper from Jonathan Tuminaro and Edward Redish.

The authors have carried out a detailed analysis of the discussions a group of physics students had when solving a particular problem. They’ve worked hard (the researchers, as well as the students) – the first case study they chose was a conversation 45 minutes long.

While tackling the problem, the students ‘played’ several epistemic games – or, put more simply, used different ways of thinking. Six different games are identified – corresponding to six distinctly different ways of thinking about the same problem. Students don’t stick to one game, though; they can flip between several. Very quickly, they are:

1. Mapping meaning to mathematics. This is where the students work out what is going on (or what they think is going on) and put it into mathematical form (e.g. make an equation) – the equation can then be used to calculate things.

2. Mapping mathematics to meaning.  Kind of the reverse of (1). Here the students start with a mathematical expression they know, and work out what it might mean in practice.

3. Physical Mechanism Game. In this game the students try to draw sense from their own intuition of the physical principles involved.

4. Pictorial Analysis Game.  Here diagrams are used as the major step.

5. Recursive Plug-and-Chug. I’ll quote from the authors here, because they do it so well: “[here the students] plug quantities into physics equations and churn out numeric answers, without conceptually understanding the physical implications of their calculations.”   (The emphasis is mine.)

6. Transliteration to mathematics. Here the students draw from a worked example of another, similar problem, and try to map quantities from problem A onto quantities of problem B.

Now, I ask myself, which games do I see my students playing in my classes, and using in the assignments I set? I have to say that in many cases I’m not sure – and probably my teaching is the worse for it. I can say which games I would like to see students using (1 to 4) and which would make me shudder (5 and 6 – in which the students develop no physical understanding of what is happening), but do I know? There are certainly ways of getting students to use the ‘right’ games, notably setting the right kind of assessment questions.

OK, so which games do I play most in my research? I’d say probably 1, 2 and 3. I do a lot of physical modelling, in which I represent a problem (e.g. how do neurons in the brain behave in a certain environment?) through a series of equations (game 1) and then work out the implications of those equations (game 2). I also draw a lot from my intuition about physics (e.g. if you increase the pressure across a pipe, you’ll get more flow, regardless of what shape the pipe is) – that’s game 3.

Finally, those physicists among you might like to know what problem the students had to solve. It was this. Three electrical charges, q1, q2 and q3 are arranged in a line, with equal distance between q1 & q2, and q2 & q3.  Charges q1 and q2 are held fixed. Charge q3 is not fixed in place, but is held in a constant position by the electrostatic forces present.  If q2 has the charge Q, what charge does q1 have?

    o    q1                   o   q2                    o  q3

The authors say that most experienced physics teachers can solve this problem in less than a minute. I solved it in about five seconds, using game 3, with a tiny smattering of game 2.  The students concerned (3 of them together) took 45 minutes – this massive difference is perhaps interesting in its own right.
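For those who would like to check their answer against the algebra, here is a quick sketch of the game-1/game-2 route (my notation, not the paper’s): let the spacing between neighbouring charges be \(d\). Charge \(q_3\) stays put only if the forces on it from \(q_1\) (a distance \(2d\) away) and from \(q_2\) (a distance \(d\) away) cancel:

\[
  \frac{1}{4\pi\varepsilon_0}\left(\frac{q_1 q_3}{(2d)^2} + \frac{Q\, q_3}{d^2}\right) = 0
  \quad\Longrightarrow\quad
  \frac{q_1}{4} + Q = 0
  \quad\Longrightarrow\quad
  q_1 = -4Q.
\]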

Reference (it’s well worth a read if you teach physics at any level): Tuminaro, J. and Redish, E. F. (2007). Elements of a cognitive model of physics problem solving: Epistemic games. Physical Review Special Topics – Physics Education Research 3, 020101. DOI: 10.1103/PhysRevSTPER.3.020101.

March 19, 2010

Mobile phone physics

Filed under: university — Marcus Wilson @ 3:51 pm

This post is a copy from my (Marcus Wilson’s) blog physicsstop. ( http://sci.waikato.ac.nz/physicsstop )

Just occasionally, I have a crazy thought regarding a physics demonstration.   This is one that I’m thinking about inflicting on my third year electromagnetism class.  

We’ve been discussing the way electromagnetic waves travel (or rather, do not travel) through electrical conductors. Basically, conductors allow electric currents to flow in response to an applied electric field (in simple terms, this just means applying a voltage). Electromagnetic waves such as visible light, radio and X-rays contain electric fields, so when one hits a conductor, electric currents flow. Flowing currents heat up a material. Where does this heat energy come from? From the wave. In other words, conductors suck energy out of an electromagnetic wave, and, broadly speaking, the wave can only penetrate so far into the conductor. This distance is what’s known as the ‘skin depth’.

Skin depth depends importantly on two things – the conductivity of the material and the frequency of the wave. The higher the conductivity, or the higher the frequency, the smaller the skin depth.  Thus, if you consider the waves to/from a mobile phone (frequency of around 1000 MHz) travelling through aluminium (a very good conductor) the skin depth turns out to be small indeed – microns in size.  That means wrapping a phone in aluminium foil will prevent it from picking up a signal. I’ve already shown this in class.

But – here’s the crazy thought – what about water? Distilled water is a pretty poor conductor, but what comes out of the tap is loaded with dissolved salts and has a moderate conductivity, albeit several orders of magnitude below aluminium foil. What’s its skin depth at mobile phone frequencies? I’ve done some quick back-of-the-envelope calculations, and I reckon something of the order of a few centimetres. So… I predict that if we put the phone under just a few millimetres of water (YES, it needs waterproofing first!) it will still receive a signal, but suspend it in the middle of a swimming pool and there’s going to be no reception at all.
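Here’s the back-of-the-envelope in code form. This is a sketch only: the conductivities are typical textbook values (tap water varies a lot with its dissolved salts), and at 1000 MHz water isn’t really a ‘good conductor’, so the good-conductor formula is only a rough guide there.

    import math

    MU0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

    def skin_depth(sigma, f):
        """Good-conductor skin depth: sqrt(2 / (mu0 * sigma * omega))."""
        omega = 2 * math.pi * f
        return math.sqrt(2 / (MU0 * sigma * omega))

    f = 1e9  # ~1000 MHz, a typical mobile phone frequency

    # Assumed conductivities: aluminium ~3.5e7 S/m, tap water ~0.05 S/m.
    print(f"Aluminium: {skin_depth(3.5e7, f) * 1e6:.1f} microns")  # ~2.7 microns
    print(f"Tap water: {skin_depth(0.05, f) * 100:.0f} cm")        # ~7 cm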

I reckon that getting my class to estimate how much water would be required to shut out the signal, and then design an experiment (one that might or might not involve ‘borrowing’ the university swimming pool for a short while) would be a great way to get them to think about the various issues themselves. There’s plenty of literature to back up that assertion – e.g. Etkina et al., American Journal of Physics 74(11), p. 979 (2006). The best thing is that I can’t be tempted to tell them the answer – because I don’t know it – I haven’t done the experiment myself. Though I have found this YouTube video…
