Friday, 21 December 2012

Defining: What does it mean to understand computing?

In the About page for this blog, I wrote, “Computing Education Research is about how people come to understand computing, and how we can facilitate that understanding.” Juha Sorva’s dissertation (now available!) helped me come to an understanding of what it means to “understand computing.” I describe a fairly technical definition (in cognitive and learning sciences terms), which is basically Juha’s. I end with some concrete pedagogical recommendations that are implied by this definition.

A Notional Machine: Benedict du Boulay wrote in the 1980s about a “notional machine,” that is, an abstraction of the computer that one can use for thinking about what a computer can and will do. Juha writes:
  Du Boulay was probably the first to use the term notional machine for “the general properties of the machine that one is learning to control” as one learns programming. A notional machine is an idealized computer “whose properties are implied by the constructs in the programming language employed” but which can also be made explicit in teaching (du Boulay et al., 1981; du Boulay, 1986).

The notional machine is how to think about what the computer is doing. It doesn’t have to be about the CPU at all. Lisp and Smalltalk each have small, well-defined notional machines: there is a specific definition of what happens when the program executes, in terms of application of S-expressions (Lisp) and in terms of message sending to instances of classes (Smalltalk). C has a different notional machine, which isn’t at all like Lisp’s or Smalltalk’s. C’s notional machine is closer to the notional machine of the CPU itself, but is still a step above the CPU (e.g., assembly language has no assignment statements or types). Java has a complicated notional machine that involves both object-oriented semantics and bit-level semantics.
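To make “a specific definition of what happens when the program executes” concrete, here is a toy sketch of my own (nothing this simple appears in the thesis): a handful of Python rules that completely determine what a tiny expression language’s constructs do, which is all a notional machine is.

    # A toy notional machine: a few rules that fully define
    # what each construct does when the program runs.
    def evaluate(expr, env):
        # A literal number evaluates to itself.
        if isinstance(expr, int):
            return expr
        # A name evaluates to whatever the environment binds it to.
        if isinstance(expr, str):
            return env[expr]
        # An operation is a rule applied to the values of its parts.
        op, left, right = expr
        a, b = evaluate(left, env), evaluate(right, env)
        return a + b if op == "+" else a * b

    # ("x" + 1) * 2 with x bound to 4 evaluates to 10.
    print(evaluate(("*", ("+", "x", 1), 2), {"x": 4}))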

A notional machine is not a mental representation. Rather, it’s a learning objective. I suggest that understanding a realistic notional machine is implicitly a goal of computational thinking. We want students to understand what a computer can do, what a human can do, and why those are different. For example, a computer can easily compare two numbers, can compare two strings with only slightly more effort, and has to be provided with an algorithm (one unlikely to work like the human eye) to compare two images. I’m saying “computer” here, but what I really mean is “a notional machine.” Finding a route from one place to another is easy for Google Maps or my GPS, but it takes programming before a notional machine can find a route along a graph. Counting the number of steps from the top of a tree to its furthest leaf is easy for us, but hard for novices to put into an algorithm (see the sketch below). While it’s probably not important for everyone to learn that algorithm, it’s important for everyone to understand why we need algorithms like that: to understand that computers have different operations than people do, that is, a different notional machine. If we want people to understand why we need algorithms, and why some things are harder for computers than for humans, we want people to understand a notional machine.
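For instance, here is roughly what a novice has to articulate for the tree-depth task, assuming a representation I’ve made up for illustration (each node is a label plus a list of children):

    # Height of a tree: the number of steps from the root to the
    # furthest leaf. Easy for the human eye; the notional machine
    # needs an explicit rule: a leaf is at height 0, and otherwise
    # the height is 1 plus the height of the deepest child.
    def height(tree):
        label, children = tree
        if not children:
            return 0
        return 1 + max(height(child) for child in children)

    t = ("root", [("a", [("leaf1", [])]), ("b", [])])
    print(height(t))  # -> 2: root -> a -> leaf1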

Mental Models: A mental model is a personal representation of some aspect of the world. A mental model is executable (“runnable,” in Don Norman’s terms) and allows us to make predictions. When we flip a switch, we predict that the light will go on or off. Because you were able to read that sentence and know what I meant, you have a mental model of a light that has a switch. You can predict how it works. A mental model is absolutely necessary to be able to debug a program: you have to have a working expectation of what the program was supposed to do, and how it was supposed to get there, so that you can compare what it’s actually doing to that expectation.
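In code, that comparison looks something like this contrived example of mine: the mental model produces a prediction before the program runs, and the mismatch with the machine’s answer is the signal to look closer.

    # Debugging means comparing a prediction (from your mental model)
    # against what the notional machine actually does.
    def total(values):
        result = 0
        for v in values:
            result = v          # bug: should be result = result + v
        return result

    prediction = 6              # my mental model says 1 + 2 + 3 = 6
    actual = total([1, 2, 3])
    print(prediction, actual)   # 6 3 -- the mismatch starts the debugging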

So now I can offer a definition, based on Juha’s thesis: To understand computing is to have a robust mental model of a notional machine. My absolute favorite part of Juha’s thesis is his Chapter 5, where he describes what we know about how mental models are developed. I’ve already passed on the PDF of that chapter to my colleagues and students here at Georgia Tech. He found some fascinating literature about the stages of mental model development, about how mental models can go wrong (it’s really hard to fix a flawed mental model!), and about the necessary pieces of a good mental model. De Kleer and Brown describe mental models in terms of sub-models, and tell us what principles are necessary for “robust” mental models. The first and most important principle is this one (from Juha Sorva’s thesis, page 55):
  • The no-function-in-structure principle: the rules that specify the behavior of a system component are context free. That is, they are completely independent of how the overall system functions. For instance, the rules that describe how a switch in an electric circuit works must not refer, not even implicitly, to the function of the whole circuit. This is the most central of the principles that a robust model must follow.
When we think about a switch, we know that it opens and closes a circuit. A switch might turn a light on and off: that’s one function for the switch. A switch might turn a fan on and off: that’s another function for a switch. We know what a switch does, completely decontextualized from any particular role or function. Thus, a robust mental model of a notional machine means that you can talk about what a computer can do, completely apart from what a computer is doing in any particular role or function. A robust mental model of a notional machine thus includes an understanding of how an IF or WHILE or FOR statement works, or what happens when you call a method on an object in Java (including searching up the class hierarchy), or how types work, completely independently of any given program. If you don’t know the pieces separately, you can’t make predictions, and you can’t understand how they serve a particular function in a particular program.
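Here is the same idea in code, in a sketch of my own: the structural rule for WHILE (evaluate the condition; if true, run the body and repeat) is identical in both loops, while the function of each loop is entirely different.

    # One structure, two functions. The WHILE rule is context free:
    # evaluate the condition; if it is true, run the body and repeat.

    # Function 1: sum the numbers from 1 to n.
    n, i, total = 5, 1, 0
    while i <= n:
        total = total + i
        i = i + 1

    # Function 2: find the first power of two past a limit.
    limit, p = 100, 1
    while p <= limit:
        p = p * 2

    print(total, p)  # 15 128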

It is completely okay to have a mental model that is incomplete. Most people who use scissors don’t think about them as levers, but if you know physics or mechanical engineering, you understand different sub-models that you can use to inform your mental model of how scissors work. You don’t even have to have a complete mental model of the notional machine of your language: if you never have to deal with casting between types, then you don’t have to know it. Your mental model doesn’t have to encompass the whole notional machine. You just don’t want your mental model to be wrong. What you know should be right, because it’s so hard to change a mental model later. These observations lead me to a pedagogical prediction: Most people cannot develop a robust mental model of a notional machine without a language. Certainly, some people can understand what a computer can do without having a language given to them. Turing came up with his machine without anyone telling him what the operations of the machine could do. But very few of us are Turings. For most people, having a name (or a diagram; visual notations are also languages) for an operation (or sub-model, in de Kleer and Brown’s terms) makes it easier for us to talk about it, to reference it, and to see it in the context of a given function (or program).

I’m talking about programming languages here in a very different way than how they normally enter into our conversation. In much of the computational thinking discussion, programming is yet another thing to learn: a complexity, an additional challenge. Here, I’m talking about languages as a notation that makes it easier to understand computing, to achieve computational thinking. Maybe there isn’t yet a language that achieves these goals. Here’s another pedagogical recommendation that Juha’s thesis has me thinking about: We need to discuss both structure and function in our computing classes. I suspect that most of the time when I describe “x = x + 1” in my classes, I say, “increment x.” But that’s the function. Structurally, that’s an assignment statement. Do I make sure that I emphasize both aspects in my classes? Students need both, and to have a robust mental model, they probably need the structure emphasized more than the function.
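To spell that out with an example of my own: each assignment below has exactly the same structure (evaluate the right-hand side, then store the result into the name on the left), but we describe each one by a different function.

    x = 0
    x = x + 1                  # we say "increment x": that's the function;
                               # structurally: evaluate x + 1, store it in x

    total = 0
    for price in [3, 4, 5]:
        total = total + price  # same structure, different function:
                               # accumulate a running total

    print(x, total)            # 1 12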

We see that distinction between structure and function a lot in Juha’s thesis. Juha not only does this amazing literature review, but he then does three studies of students using UUhistle. UUhistle works for many students, but Juha also explores when it didn’t, which may be more interesting from a research perspective. A common theme in his studies is that some students didn’t really connect the visualization to the code: they talked about the “boxes” on the screen and did random walks, poking at the graphics. The observation-session transcripts in the thesis capture this honestly and unedited.
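UUhistle does far more than this, but as a crude text-only sketch of the kind of code-to-state mapping at issue, Python’s standard sys.settrace hook can print the variables (the “boxes”) that exist just before each line runs. This is my illustration, not how UUhistle works.

    import sys

    def show_state(frame, event, arg):
        # Called by the interpreter as the program runs. On each "line"
        # event, print the frame's variables: a text version of the
        # "boxes" a visualizer draws for the notional machine's state.
        if event == "line":
            print("line", frame.f_lineno, "variables:", frame.f_locals)
        return show_state  # keep tracing inside this frame

    def demo():
        x = 1
        y = x + 1
        return x * y

    sys.settrace(show_state)  # install the trace hook
    demo()
    sys.settrace(None)        # uninstall it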
What Juha describes isn’t unique to program visualization systems. I suspect that all of us have seen or heard something pretty similar, but with text instead of graphics. Students do “random walks” of code all the time. Juha talks a good bit about how to help his students better understand how UUhistle’s graphical representations map to the code and to the notional machine. Juha gives us a conceptual language to think about this with: the boxes and “incomprehensible things” are structures that must be understood on their own terms, in order to develop robust mental models, and then understood in terms of their function and role in a program. That’s a challenge for us as educators.

So here’s the full definition: Computing education research is about understanding how people develop robust mental models of notional machines, and how we can help them achieve those mental models.

