The Muller research described in the post below was discussed here previously, and is related to the predict-before-demo work that Eric Mazur presented at last year's ICER. The key point here is that data mining can't get at this level of abstraction when it comes to identifying good teaching. I'm also concerned that data mining can't help if you lose 80% of your subject pool — you can't learn about people who aren't there.
But even granting that you can get sufficiently rich information about the students, there's another hard problem. Let's say that, thanks to the upgrade in your big data infinite improbability drive made possible by your new Spacey space sprocket, your system is able to flag at least a critical mass of videos taught in the Muller method as having a bigger educational impact on students than the average educational video, by some measure you have identified. Would the machine be able to infer that these videos belong in a common category in terms of the reason for their effectiveness? Would it be able to figure out what Muller did? There are lots of reasons why a video might be more effective than average, and many of them are internal to the narrative structure of the video. The machine only knows things like the format of the video, its length, what kind of class it's in, who the creator is, when it was made, and so on. Beyond the external characteristics of the video file, it mostly knows what we tell it about the contents. It has no way to inspect the video and deduce that a particular presentation strategy is being used. We are nowhere close to having a machine smart enough to do what Muller did and identify a pattern in the narrative of the speaker.