Showing posts with label afl. Show all posts

Saturday, September 10, 2011

Effective Questioning


In 2002 Trevor Kerry estimated that teachers ask on average 43.6 questions per classroom hour. Given this frequency, it is perhaps not surprising that they don’t wait very long for an answer: Kathleen Cotton (1988) estimated that the average time a pupil was allowed before beginning an answer was less than one second.





Some of the purposes of questioning (after Cotton 1988) include:


  • To motivate learners
  • To assess learners
  • To revise previous learning
  • To nurture insights
  • To develop learning skills

"These purposes are generally pursued in the context of classroom recitation, defined as a series of teacher questions, each eliciting a student response and sometimes a teacher reaction to that response." (Cotton 1988). Students have to:

  • Pay attention to the question
  • Understand what the question is asking
  • Think of an answer
  • Articulate the answer.


Cotton's meta-analysis of research led to some general conclusions:


  • Asking questions improves learning
  • Frequent questions improve learning of facts but do not improve learning of complex material (some suggest they can make such learning worse)
  • Oral questions are more effective than written questions
  • Questions which focus attention on key features result in better comprehension



For years, before watching a video, I gave students a list of questions to be answered during the video. I assumed that this would encourage them to pay attention to the video and result in enhanced learning, both for the things specifically targeted by the questions and for other information (because they are more focussed). Cotton (1988) seems to imply that this is effective for students who are older, cleverer and more motivated, but that younger children and poorer readers will focus exclusively on the questions and may miss other material. Nevertheless, it still seems worthwhile.


Another technique I have used in class is the quick-fire quiz. This started when I was teaching a modular Science GCSE which was assessed using short fact-based questions from a question bank. Very quickly I became able to predict the questions, so I started every class with a quiz using the same questions. My favourite was 'What does background radiation come from?', to which the answer was 'the sun [1], the stars [1] and rocks [1], especially granite [1]'. I tried this with a very low achieving class. One lad averaged 2/60 on these module tests. On the Radioactivity test he scored 6/60, the difference being the 4 marks for the background radiation question. My triumph was spoiled only by the realisation that he had given the same answer (the sun, the stars and rocks, especially granite) to several questions.


Nevertheless, rote learning seemed to have a part to play. I persisted. Another class included a very able but very lazy lad (later he became a crack addict) who averaged about 15/60 on these tests because he did absolutely no revision. On the Radioactivity tests he scored 45/60 because I had forced him to do the revision. There are times when you can lead a horse to water and make him drink!


But "Should we be asking questions which require literal recall of text content and only very basic reasoning? Or ought we to be posing questions which call for speculative, inferential and evaluative thinking?" asks Cotton (1988). The research suggests that:
  • Lower cognitive level questions are better than higher cognitive level questions at primary; at secondary it is better to use a mix of levels.
  • Teachers ask lower level questions of those students they perceive as less able.
  • If you ask lower level questions, make them easy enough for most students to answer correctly.
  • Lower level questions are better if you want to teach knowledge (i.e. lower level questioning for lower level cognitive aims: not a surprise!)
  • Teaching students to draw inferences results in higher learning gains.
  • For secondary students, using significantly more than the average of 20% higher level questions produces greater learning.
  • For secondary students, using 50% or more higher level questions gives:

  • Better behaviour
  • Longer answers with a greater number of complete sentences
  • A greater number of both relevant answers and relevant questions asked by students
  • A greater number of student-student interactions

Wait time



On average teachers allow students less than one second to answer. They give less time to those perceived as less able.


The best wait time for lower cognitive level questions is 3s. For higher level questions the longer the better.


If you wait longer than 3s this will give:

  • Better achievement
  • Better memory
  • A greater number of responses, especially at a high cognitive level. This increase is greater for students reluctant to participate.
  • Longer responses more often backed up by better evidence
  • A greater number of unsolicited responses, student-student interactions and questions posed by students.
  • Teachers will listen to students more carefully and engage in discussions more often.
  • Teachers will expect more of students.
  • Teachers will ask more varied questions with a higher proportion at a higher cognitive level.

Given how important questions are it is worth asking how they can be made more effective. These ideas come from John Mason.
  • Use assertions rather than questions to control behaviour. This keeps questions pure for pedagogy.
  • Develop a questioning classroom by praising pupils for attempting answers and for changing their minds when necessary.
  • Don’t play ‘guess what I’m thinking’ but ask genuine questions. “Be genuinely interested not only in what learners are thinking, but in how they are thinking, in what connections they are making and not making.”
  • Don’t ‘funnel’ down to the ‘correct’ answer by asking simpler and simpler questions. Many pupils know this game and will wait before answering until the teacher has done all the thinking work!
  • Scaffolding (using direct questions) is good at the start of learning but you need to fade the support away using increasingly indirect prompts so that pupils learn to think for themselves.
  • Learn how to wait a little longer by formulating an answer in your own mind while you are waiting for the pupil to respond.
  • Learn what a pupil sees as important in a problem by asking them to read it aloud and listening to the words they stress.
  • Ask meta-questions such as ‘Is this always, sometimes or never true?’ or "What is the same and what is different about …?"
  • Get pupils to classify information or problems and ask them to explain their classification system.
  • Get learners to make up their own questions.

Thursday, January 13, 2011

Data Mining: a very simple start for a beginner (me)

Data mining is "the process of extracting patterns from data" (Wikipedia). A data miner turns data into information. Educational data mining extracts patterns from data about learners and uses those patterns to teach better.

What sort of things can you do?

You can use data to develop categories, clusters and classifications. 

In k-Nearest Neighbour (k-NN) classification a data point is classified by a majority vote of its nearest neighbours. In the standard illustration (a green circle to be classified among red triangles and blue squares), if k=1 the green circle is classed with the red triangles, because its nearest neighbour is a red triangle. If k=3 it is again classed with the red triangles, because the majority of the three nearest neighbours are triangles. If k=5 it is classified as a blue square, because three of the five nearest neighbours are blue squares. Clearly the choice of k is critical. An alternative method is to weight the votes by the distance to each of the nearest neighbours.
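The voting rule can be sketched in a few lines of Python. The coordinates below are hypothetical, chosen so that the vote flips with k just as in the green-circle example:

```python
from collections import Counter
import math

def knn_classify(query, points, k):
    # points: list of ((x, y), label); classify by majority vote
    # among the k nearest neighbours of the query point.
    ranked = sorted(points, key=lambda p: math.dist(query, p[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical layout: two triangles close to the query, three squares further out.
data = [((1, 0), "triangle"), ((0, 2), "triangle"),
        ((3, 0), "square"), ((0, 4), "square"), ((5, 0), "square")]

for k in (1, 3, 5):
    print(k, knn_classify((0, 0), data, k))
# k=1 -> triangle, k=3 -> triangle, k=5 -> square
```

Distance-weighted voting would replace the simple `Counter` tally with a sum of (for example) inverse distances per label, so that nearer neighbours count for more.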


You can try to discover behaviours which occur together. For example, in the sentence "This is the life!" there are 2×e, 1×f, 2×h and so on. If we only count characters with 2 or more occurrences, the frequent 1-sequences are: 2×e, 2×h, 3×i, 2×s, 2×t (and 3× the space, if we count it). Building 2-sequences only from these frequent characters, the occurrences in the text include: e_, hi, he, is, is, s_, s_, th, th. Using a frequency threshold of 2 again, we are left with is, s_ and th as our frequent 2-sequences. Moving to 3-sequences (again extending only the frequent 2-sequences) with the same threshold, we have just one frequent 3-sequence: is_. Moving to 4-sequences we find is_i and is_t; neither passes the threshold, so the algorithm stops. What have we learnt? The 1-sequences could tell us something about the commonest letters in English; the 2-sequences tell us that is and th are frequent combinations and that s often occurs at the end of words; the 3-sequence tells us that is often occurs at the end of words.
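This level-by-level counting (grow candidates only from sequences that were frequent at the previous level, a simplified Apriori-style pruning) can be sketched in Python. Note that this version counts the space character as well, so its 1-sequence level includes the space:

```python
from collections import Counter

def frequent_sequences(text, threshold=2):
    # Find all character n-grams occurring at least `threshold` times,
    # growing n level by level and only extending sequences that were
    # frequent at the previous level (simplified Apriori pruning).
    text = text.lower()
    results = {}
    frequent = None
    n = 1
    while True:
        counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
        if frequent is not None:
            # An n-gram is a candidate only if its (n-1)-gram prefix was frequent
            counts = Counter({s: c for s, c in counts.items()
                              if s[:-1] in frequent})
        frequent = {s: c for s, c in counts.items() if c >= threshold}
        if not frequent:
            break
        results[n] = frequent
        n += 1
    return results

res = frequent_sequences("This is the life!")
print(res[2])  # {'th': 2, 'is': 2, 's ': 2}
print(res[3])  # {'is ': 2}
```

The output matches the worked example: the frequent 2-sequences are th, is and s_, the only frequent 3-sequence is is_, and no 4-sequence reaches the threshold.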

So what?

We now have a predictive framework: if you get an i expect an s (and then a space), if you get an s expect a space, if you get a t expect an h.

We could use this process to create 'recommendations' à la Amazon: if you enjoyed doing those sums you might like to try these. Or to make diagnoses, enabling us to identify the appropriate intervention for the measured behaviour.



References

Baker, S.J.D. & Yacef, K. (2009) The State of Educational Data Mining in 2009: A Review and Future Visions: http://www.educationaldatamining.org/JEDM/images/articles/vol1/issue1/JEDMVol1Issue1_BakerYacef.pdf accessed 10th January 2011

International Working Group on Educational Data Mining available at http://educationaldatamining.org/ accessed 10th January 2011

Wikipedia Data Mining available at http://en.wikipedia.org/wiki/Data_mining accessed 10th January 2011

Sunday, January 9, 2011

Learning Analytics

"Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs." (Siemens 2010)


Baer (2011) points out that online courses produce a vast amount of data. This can be used to monitor the progress of students on a day-to-day basis; this in turn can be used to tailor further teaching, for example by putting remedial measures in place.


This is all good formative assessment stuff (also called AfL, assessment for learning, or ASL, assessment to support learning).






But there are three buts to which Baer alludes:

  • You have to know what to assess;
  • You have to be able to assess it accurately;
  • You have to know what to do in response to any particular measurement.
What to assess
Normally we assess through tests, relating the scores to standards, but Wiley (2011) points out that Learning Analytics can measure other things (e.g. 'how long did the student spend reading the book?'), although he acknowledges that it can be difficult to see the correlations between such data and final achievement.

How to respond
We might use Learning Analytics to create a sort of profile of each learner which could then be used (like Amazon) to provide personalised recommendations of resources and techniques that have created success for similar types of learners (Duval 2011).






How does all this match what Nicol and Macfarlane-Dick (2006) describe as "good feedback practice"? Can any automated system give feedback as well as an experienced assessor?
Nicol and Macfarlane-Dick make it clear that feedback should be prompt so that it can actually influence learning rather than occurring after learning. Clearly the day-by-day assessment envisaged by Baer will do this. But other key elements of good feedback include:
  • Telling teachers how to shape their teaching:
    • On a meta level, Learning Analytics may "help us to realise how much of what we do is not very effective" and so enable teaching pedagogy to evolve (Duval 2011).
  • Making clear what good performance is
  • Helping learners reflect
  • Helping learners to take action to correct deficiencies
  • Encouraging T-S and S-S dialogue
It is clear that these last four could be integrated into the feedback from a Learning Analytics system; it is also clear that they should be used to shape the design of such feedback.




References 



Baer J 2011 video interview with George Siemens available at http://www.learninganalytics.net/?page_id=50 accessed 8th January 2011

Duval E 2011 video interview with George Siemens available at http://www.learninganalytics.net/?page_id=54 accessed 8th January 2011

Nicol D & Macfarlane-Dick D, 2006 Formative assessment and self-regulated learning: A model and seven principles of good feedback practice Studies in higher Education 31(2):199-218

Siemens G 2010 Learning Analytics and Knowledge available at https://tekri.athabascau.ca/analytics/ accessed 8th January 2011

Wiley D 2011 video interview with George Siemens available at http://www.learninganalytics.net/?page_id=46 accessed 8th January 2011

Wednesday, November 17, 2010

Diagnostic assessment

The 'rip, mix, burn' model of pedagogy (see post 14th November 2010) is very compelling but it fails to provide any guidance about assessment.



Edinburgh Castle
Imagine if a person phoned you up and asked you for directions to travel to Edinburgh. I might say: go to the Black Cat roundabout and turn left; continue north up the A1 until you reach Edinburgh. These instructions would be wrong for people travelling from Inverness, Glasgow, London or Cambridge. The first thing to do would be to find out where they were starting from.

You must assess before you teach. If you don't know where a person is in their learning you can't personalise their teaching. Much of what you do teach will be wasted. Your students won't end up where you want them to. Some will get very lost indeed!

MRI scanner
In the 19th Century, medicine shook off the shackles of quackery. Doctors learnt to diagnose. They used technology (thermometers, stethoscopes, etc) to measure key indicators of health. As time moved on they developed more and more high tech diagnostics: X-ray machines, blood pressure cuffs, blood tests, MRI scanners.

By and large, education retains the pencil and paper test. Such tests give next to useless information. I attended a parents' evening where the Maths teacher was armed with a formidable array of numbers about my stepdaughter's Maths. What do the numbers mean? I asked. She wasn't very good at Maths. Which bit of Maths? I asked. Maths generally, I was told. What does she need to do to improve? I asked. Try harder and ask for help, I was told.

At least the Maths teacher had tried. Most of the teachers that evening had measured nothing and simply spoke in platitudes. I can't imagine a doctor giving me such vague replies.

So my utopian vision for the future of education is to use technology to improve our diagnostic assessment.