Friday 21 December 2012

Science of Spatial Learning: Nora Newcombe at NCWIT

Great to see this coverage of SILC in US News and World Report, and I’m excited to hear Dr. Nora Newcombe speak at the NCWIT Summit on Tuesday of this week. As I’ve mentioned previously, SILC hasn’t looked much at computer science yet, but there are lots of reasons to think that spatial learning plays an important role in computing education. Spatial reasoning, the ability to mentally visualize and manipulate two- and three-dimensional objects, is a strong predictor of talent in science, technology, engineering, and math, collectively known as STEM.

Yet, “these skills are not valued in our society or taught adequately in the educational system,” says Newcombe, who is also principal investigator for the Spatial Intelligence and Learning Center. “People will readily say such things as ‘I hate math,’ or ‘I can’t find my way when I’m lost,’ and think it’s cute, whereas they would be embarrassed to say ‘I can’t read.’” “People have a theory about this skill, that it’s innate at birth and you can’t develop it, and that’s really not true,” she adds. “It’s probably true that some people are born with a better ability to take in spatial information, but that doesn’t mean if you aren’t born with it, you can’t change. The brain has a certain amount of plasticity.”


Defining: What does it mean to understand computing?

In the About page for this blog, I wrote, “Computing Education Research is about how people come to understanding computing, and how we can facilitate that understanding.” Juha Sorva’s dissertation (now available!) helped me come to an understanding of what it means to “understand computing.” I describe a fairly technical (in terms of cognitive and learning sciences) definition, which basically is Juha’s. I end with some concrete pedagogical recommendations that are implied by this definition.

A Notional Machine: Benedict du Boulay wrote in the 1980s about a “notional machine,” that is, an abstraction of the computer that one can use for thinking about what a computer can and will do. Juha writes:
Du Boulay was probably the first to use the term notional machine for “the general properties of the machine that one is learning to control” as one learns programming. A notional machine is an idealized computer “whose properties are implied by the constructs in the programming language employed” but which can also be made explicit in teaching (du Boulay et al., 1981; du Boulay, 1986). The notional machine is how to think about what the computer is doing. It doesn’t have to be about the CPU at all. Lisp and Smalltalk each have small, well-defined notional machines — there is a specific definition of what happens when the program executes, in terms of application of S-expressions (Lisp) and in terms of message sending to instances of classes (Smalltalk). C has a different notional machine, which isn’t at all like Lisp’s or Smalltalk’s. C’s notional machine is closer to the notional machine of the CPU itself, but is still a step above the CPU (e.g., there are no assignment statements or types in assembly language). Java has a complicated notional machine that involves both object-oriented semantics and bit-level semantics.
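A small sketch can make the idea concrete (Python here, chosen only for illustration; it is not one of the languages Juha’s passage discusses). Predicting this program’s output requires Python’s notional machine, in which assignment binds a name to an object rather than copying the object:

```python
# In Python's notional machine, "b = a" makes b a second name for the
# SAME list object that a names; it does not copy the list. Knowing
# that rule, and nothing about the CPU, is what lets you predict the output.
a = [1, 2]
b = a          # b and a now name one shared list object
a.append(3)    # a mutation through one name is visible through the other
print(b)       # -> [1, 2, 3]
```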

A notional machine is not a mental representation. Rather, it’s a learning objective. I suggest that understanding a realistic notional machine is implicitly a goal of computational thinking. We want students to understand what a computer can do, what a human can do, and why that’s different. For example, a computer can easily compare two numbers, can compare two strings with only slightly more effort, and has to be provided with an algorithm (that is unlikely to work like the human eye) to compare two images. I’m saying “computer” here, but what I really mean is, “a notional machine.” Finding a route from one place to another is easy for Google Maps or my GPS, but it requires programming for a notional machine to be able to find a route along a graph. Counting the number of steps from the top of the tree to the furthest leaf is easy for us, but hard for novices to put in an algorithm. While it’s probably not important for everyone to learn that algorithm, it’s important for everyone to understand why we need algorithms like that — to understand that computers have different operations (notional machines) than people. If we want people to understand why we need algorithms, and why some things are harder for computers than humans, we want people to understand a notional machine.
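As a concrete sketch of the tree example (in Python, since the post names no particular language), here is the algorithm a novice would have to spell out for the notional machine; the node representation is my own assumption:

```python
# A node is a (value, list-of-children) pair. The depth a person
# "just sees" by looking at a drawing of the tree must be stated
# recursively before the notional machine can compute it.
def depth(node):
    value, children = node
    if not children:              # a leaf: no steps below it
        return 0
    return 1 + max(depth(child) for child in children)

tree = ("a", [("b", []), ("c", [("d", [])])])
print(depth(tree))  # -> 2: two steps from "a" down to "d"
```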

Mental Models: A mental model is a personal representation of some aspect of the world. A mental model is executable (“runnable” in Don Norman’s terms) and allows us to make predictions. When we turn a switch on and off, we predict that the light will go on and off. Because you were able to read that sentence and know what I meant, you have a mental model of a light which has a switch. You can predict how it works. A mental model is absolutely necessary to be able to debug a program: You have to have a working expectation of what the program was supposed to do, and how it was supposed to get there, so that you can compare what it’s actually doing to that expectation.

So now I can offer a definition, based on Juha’s thesis: To understand computing is to have a robust mental model of a notional machine. My absolutely favorite part of Juha’s thesis is his Chapter 5, where he describes what we know about how mental models are developed. I’ve already passed on the PDF of that chapter to my colleagues and students here at Georgia Tech. He found some fascinating literature about the stages of mental model development, about how mental models can go wrong (it’s really hard to fix a flawed mental model!), and about the necessary pieces of a good mental model. DeKleer and Brown provide a description of mental models in terms of sub-models, and tell us what principles are necessary for “robust” mental models. The first and most important principle is this one (from Juha Sorva’s thesis, page 55):
  • The no-function-in-structure principle: the rules that specify the behavior of a system component are context free. That is, they are completely independent of how the overall system functions. For instance, the rules that describe how a switch in an electric circuit works must not refer, not even implicitly, to the function of the whole circuit. This is the most central of the principles that a robust model must follow.
When we think about a switch, we know that it opens and closes a circuit. A switch might turn on and off a light. That would be one function for the switch. A switch might turn on and off a fan. That’s another function for a switch. We know what a switch does, completely decontextualized from any particular role or function. Thus, a robust mental model of a notional machine means that you can talk about what a computer can do, completely apart from what a computer is doing in any particular role or function. A robust mental model of a notional machine thus includes an understanding of how an IF or WHILE or FOR statement works, or what happens when you call a method on an object in Java (including searching up the class hierarchy), or how types work, completely independently of any given program. If you don’t know the pieces separately, you can’t make predictions, or understand how they serve a particular function in a particular program.
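A minimal sketch of the no-function-in-structure principle (my own example, not DeKleer and Brown’s): the rules for the switch below never mention lights or fans, yet the same component can serve either function:

```python
class Switch:
    """Context-free behavior: a switch is open or closed, nothing more.

    Its rules never refer, even implicitly, to what the circuit is for.
    """
    def __init__(self):
        self.closed = False

    def toggle(self):
        self.closed = not self.closed


def device_is_on(switch):
    # The *function* (powering a light, a fan, ...) lives in the
    # surrounding circuit, outside the switch's own rules.
    return switch.closed


light_switch, fan_switch = Switch(), Switch()
light_switch.toggle()
print(device_is_on(light_switch), device_is_on(fan_switch))  # -> True False
```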

It is completely okay to have a mental model that is incomplete. Most people who use scissors don’t think about them as levers, but if you know physics or mechanical engineering, you understand different sub-models that you can use to inform your mental model of how scissors work. You don’t even have to have a complete mental model of the notional machine of your language. If you don’t have to deal with casting to different types, then you don’t have to know it. Your mental model doesn’t have to encompass the notional machine. You just don’t want your mental model to be wrong. What you know should be right, because it’s so hard to change a mental model later. These observations lead me to a pedagogical prediction. Most people cannot develop a robust mental model of a notional machine without a language. Absolutely, some people can understand what a computer can do without having a language given to them. Turing came up with his machine, without anyone telling him what the operations of the machine could do. But very few of us are Turings. For most people, having a name (or a diagram — visual notations are also languages) for an operation (or sub-model, in DeKleer and Brown terms) makes it easier for us to talk about it, to reference it, to see it in the context of a given function (or program).

I’m talking about programming languages here in a very different way than how they normally enter into our conversation. In much of the computational thinking discussion, programming is yet another thing to learn. It’s a complexity, an additional challenge. Here, I’m talking about languages as a notation which makes it easier to understand computing, to achieve computational thinking. Maybe there isn’t yet a language that achieves these goals. Here’s another pedagogical recommendation that Juha’s thesis has me thinking about: We need to discuss both structure and function in our computing classes. I suspect that most of the time when I describe “x = x + 1” in my classes, I say, “increment x.” But that’s the function. Structurally, that’s an assignment statement. Do I make sure that I emphasize both aspects in my classes? Students need both, and to have a robust mental model, they probably need the structure emphasized more than the function.
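To illustrate the distinction with a sketch of my own: structurally, every assignment below is the same kind of statement (evaluate the right side, then rebind the name), while functionally each plays a different role:

```python
# Structure: each line is "name = name + 1", an assignment statement.
# Function: what the increment *means* differs in each context.
count = 0
count = count + 1   # function: counting an event

index = 0
index = index + 1   # function: advancing to the next element

score = 0
score = score + 1   # function: awarding a point

print(count, index, score)  # -> 1 1 1
```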

We see that distinction between structure and function a lot in Juha’s thesis. Juha not only does this amazing literature review, but he then does three studies of students using UUhistle. UUhistle works for many students, but Juha also explores when it didn’t — which may be more interesting, from a research perspective. A common theme in his studies is that some students didn’t really connect the visualization to the code. They talk about these “boxes” and do random walks poking at graphics. As he describes in one observation session (which I’m leaving unedited, because I enjoyed the honesty of Juha’s transcripts):


What Juha describes isn’t unique to program visualization systems. I suspect that all of us have seen or heard something pretty similar to the above, but with text instead of graphics. Students do “random walks” of code all the time. Juha talks a good bit about how to help his students better understand how UUhistle graphical representations map to code and to the notional machine. Juha gives us a conceptual language to think about this with. The boxes and “incomprehensible things” are structures that must be understood on their own terms, in order to develop robust mental models, and understood in terms of their function and role in a program. That’s a challenge for us as educators. So here’s the full definition: Computing education research is about understanding how people develop robust mental models of notional machines, and how we can help them achieve those mental models.


Why high-income students do better: It’s not the velocity but the acceleration

Low-income students and schools are getting better, according to this study. They’re just getting better much more slowly than wealthy students and schools. Both are improving incrementally (both moving in the right direction), but each increment is bigger for the rich (acceleration favors the rich). We heard something similar from Michael Lach last week. The NSF CE21 program organized a workshop for all the CS10K efforts focused on teacher professional development. It was led by Iris Weiss, who runs one of the largest education research evaluation companies. Michael was one of our invited speakers, on the issue of scaling. Michael has been involved in Chicago Public Schools for years, and recently returned from a stint at the Department of Education. He told us about his efforts to improve reading, math, and science scores through a focus on teacher professional development. It really worked, at both the K-8 and high school levels. Both high-SES (socioeconomic status) and low-SES students improved compared to control groups. But the gap didn’t get smaller.

Despite public policy and institutional efforts such as need-blind financial aid and no-loan policies designed to attract and enroll more low-income students, such students are still more likely to wind up at a community college or noncompetitive four-year institution than at an elite university, whether a member of the Ivy League or a state flagship. The study, “Running in Place: Low-Income Students and the Dynamics of Higher Education Stratification,” will be published next month in Educational Evaluation and Policy Analysis, but an abstract is already available on the journal’s website. “I think [selective colleges] very much want to bring in students who are low-income, for the most part,” said Michael N. Bastedo, the study’s lead author and an associate professor of higher education at the University of Michigan. “The problem is, over time, the distance between academic credentials for wealthy students and low-income students is getting longer and longer…. They’re no longer seen as competitive, and that’s despite the fact that low-income students are rising in their own academic achievement.”


Next Generation Science Standards available for comment now through 1 June

Check out “Gas station without pumps” for more on the Next Generation Science Standards, available now for comment (but only through this week). There is a bit of computational thinking and computing education in there, but buried (as the blog post points out). I know that there is a developing effort to get more computation in there. The first public draft of the Next Generation Science Standards is available from May 11 to June 1. We welcome and appreciate your feedback. [The Next Generation Science Standards]

Note that there are only 3 weeks given for the public review of this draft of the science standards, and that time is almost up. I’ve not had time to read the standards yet, and I doubt that many others have either. We have to hope that someone we respect has enough time on their hands to have done the commenting for us (but the people I respect are all busy—particularly the teachers who are going to have to implement the standards—so who is going to do the commenting?).

I’m also having some difficulty finding a document containing the standards themselves. There are clear links to front matter, how to interpret the standards, a survey for collecting feedback, a search interface, and various documents about the standards, but I had a hard time finding a simple link to a single document containing all the standards. It was hidden on their search page, rather than being an obvious link on the main page.


Thursday 20 December 2012

Visual ability predicts a computer science career: Why? And can we use that to improve learning?

I've raised this question before, but since I just saw Nora Newcombe speak at NCWIT, I thought it was worth raising the issue again. Here’s my picture of one of her slides — I could definitely have used jitter removal on my camera, but I hope it’s clear enough to make the point.


This is from a longitudinal study, testing students’ visual ability, then tracking what fields they go into later. Having significant visual ability most strongly predicts an Engineering career, but in second place (and really close) is “Mathematics and Computer Science.” That score at the bottom is worth noting: Having significant visual ability is negatively correlated with going into Education. Nora points out that this is a significant problem. Visual skills are not fixed. Training in visual skills improves those skills, and the effect is durable and transferable. But, the researchers at SILC found that teachers with low visual skills had more anxiety about teaching visual skills, and those teachers depressed the impact on their students. A key part of Nora’s talk was showing how the gender gap in visual skills can be easily reduced with training (relating to the earlier discussion about intelligence), such that women perform just as well as men.

The Spatial Intelligence and Learning Center (SILC) is now in its sixth year of a ten-year program. I don’t think that they’re going to get to computer science before the 10th year, but I hope that someone does. The results in mathematics alone are fascinating and suggest some significant interventions for computer science. For example, Nora mentioned an in-press paper by Sheryl Sorby showing how teaching students to improve their spatial skills improved their performance in Calculus, and I have heard that she has similar results about computer science. Could we improve learning in computer science (especially data structures) by teaching spatial skills first?


Women leave academia more than men, but greater need to change in computing

I did my monthly post at Blog@CACM on some of the recent data on how few women there are in computing. I suggested that things haven’t gotten better in the last 10 years because we really haven’t decided that there’s a problem with under-representation. The comments to that post suggest that I’m right. Blog@CACM posts don’t often get comments. Three in a week is a lot, and two of those expressed the same theme: “Women are choosing not to go into IT. Why is that a problem?” It’s a problem because there are too few people in IT, and there are many women who could do the work whom we should be trying to recruit, motivate, and engage, even if it requires us to change our own cultures and careers. Computing has a bright future, and I predict that most applications of computing in our lives are still to be invented. We need a diverse range of people to meet that future, and change in our culture and careers would be healthy.

The situation is different with respect to academia. The article linked below points out that women are turned off to careers in academia at greater rates than men. Other recent work suggests that students in doctorate programs lose interest in academia the longer that they are in it. There should be more women in academia, and academic cultures and careers should change to be more attractive to a broader range of qualified applicants. But what could make that happen? In contrast to the computing industry, academia isn’t growing. The economics of academia are changing, and there will be fewer academic jobs (especially in CS). I still believe that we ought to ramp up CS faculty hiring, in order to offer computing to more people (even everyone) on campus, but the economics and organizational trends are against me. If we were to hire in academia, we should make an effort to draw in more women and more under-represented minorities. We absolutely should strive to improve the culture and career prospects in academia to retain the (relatively little) diversity that we now have. But neither hiring nor retention is at the top of academia’s concerns right now. Maybe the young scientists are wise to seek other opportunities, and PhD students are figuring out that academia may not hold great career prospects?

Young women scientists leave academia in far greater numbers than men for three reasons. During their time as PhD candidates, large numbers of women conclude that (i) the characteristics of academic careers are unappealing, (ii) the impediments they will encounter are disproportionate, and (iii) the sacrifices they will have to make are great. Men and women show radically different developments regarding their intended future careers. At the beginning of their studies, 72% of women express an intention to pursue careers as researchers, either in industry or academia. Among men, 61% express the same intention. By the third year, the proportion of men planning careers in research had dropped from 61% to 59%. But for the women, the number had plummeted from 72% in the first year to 37% as they finish their studies.


Interactive eBook from Runestone Interactive: A Python eBook with IDE and visualization built-in

Brad Miller and David Ranum have opened up their eBook for general use at their new http://interactivepython.org site. This is the book whose use we have been studying for the last year as part of our CSLearning4U effort. It’s a great alternative to the Udacity/Coursera model of distance education: make the book more like a course, rather than capture the course in video. Our paper on this analysis just got rejected, so I’m not sure when and where we can tell the story of what happened, but I’m hoping that we can talk about it soon.

It’s fun to see my sabbatical project getting loose in the wild. It is always a bit scary to work on something creative and new and then let other people play with it and respond to it. Such is the case with the new eBook I worked on during my sabbatical. Unlike other eBooks that you may be aware of, this book — in the words of Emeril — “kicks it up a notch”. Using some cool open source JavaScript code that I’ve had to modify and bend a bit for my own use, this book allows the reader to try their hand at Python right in the book. Examples are fully runnable in two different ways. Each section has an accompanying video. My co-author, David Ranum, and I are using this book in class this Fall and it’s fun to see how the students interact with the book. We’ve had none of the usual Fall frustration at getting Python installed on students’ machines. You can have a look at the book here.

via Reputable Journal, How to Think Like a Computer Scientist – Interactive Edition. I try to be careful when talking about new, not-yet-published work here, because it annoys reviewers when they can easily discern the authorship of a “blind review” paper. In CS Ed, the identity of *any* work can be easily determined within five minutes of Googling/Binging — there are just too few people in the field. Still, reviewers downgrade our scores because I “broke faith” by talking about the work in my blog. Sigh. On a more positive note, we got three papers accepted to ICER 2012, so I do plan to talk about that work here soon.


Instructional Design Principles Improve Learning about Computing: Making Measurable Progress

I have been eager to write this blog for months, but wanted to wait until both of the papers had been reviewed and accepted for publication. Now “Subgoals Improve Performance in Computer Programming Construction Tasks” by Lauren Margulieux, Richard Catrambone, and Mark Guzdial has been accepted to the educational psychology conference EARLI SIG 6 & 7, and “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Mobile Application Development” by the same authors has been accepted into ICER 2012. Richard Catrambone has developed a subgoal model of learning. The idea is to express instructions with explicit subgoals (“Here’s what you’re trying to achieve in the next three steps”) and that doing so helps students to develop a mental model of the process. He has shown that using subgoals in instruction can help with learning and improve transfer in domains like statistics. Will it work with CS? That’s what his student Lauren set out to find out.

She took a video that Barb had created to help teachers learn how to build apps with App Inventor. She then defined a set of subgoals that she felt captured the mental model of the process. She then ran 40 undergraduates through a process of receiving subgoal-based instruction, or not: In the first session, participants completed a demographic questionnaire, and then they had 40 minutes to study the first app’s instructional material. Next, participants had 15 minutes to complete the first assessment task. In the second session, participants had 10 minutes to complete the second assessment task, which measured their retention. Then participants had 25 minutes to study the second app’s instructional material followed by 25 minutes to complete the third assessment.

An example assessment task: Write the steps you would take to make the screen change colors depending on the orientation of the phone; specifically, the screen turns blue when the pitch is greater than 2 (hint: you’ll need to make an orientation sensor and use blocks from “Screen 1” in My Blocks). Here’s an example screenshot from one of Barb’s original videos, which is what the non-subgoal group would see:


This group would get text-based instruction that looked like this:
  1. Click on “My Blocks” to see the blocks for components you created.
  2. Click on “clap” and drag out a when clap.Touched block
  3. Click on “clapSound” and drag out call clapSound.Play and connect it after when clap.Touched
The subgoal group would get a video that looks like this:


That’s it — a callout would appear for a few seconds to remind them of which subgoal they were on. Their text instructions looked a bit different:

Handle Events from My Blocks
  1. Click on “My Blocks” to see the blocks for components you created.
  2. Click on “clap” and drag out a when clap.Touched block
Set Output from My Blocks
  1. Click on “clapSound” and drag out call clapSound.Play and connect it after when clap.Touched
You’ll notice other educational psychology themes in here. We give them instructional material with a complete worked example. By calling out the mental model of the process explicitly, we reduce cognitive load associated with figuring out a mental model for themselves. (When you tell students to develop something, but don’t tell them how, you are making it harder for them.) Here’s a quote from one of the ICER 2012 reviewers (who recommended rejecting the paper): “From Figure 1, it seems that the “treatment” is close to trivial: writing headings every few lines. This is like saying that if you divide up a program into sections with a comment preceding each section or each section implemented as a method, then it is easier to recall the structure.”

Yes. Exactly. That’s the point. But this “trivial” treatment really made a difference!
  • The subgoal group attempted and completed successfully more parts (subgoals) of the assessment tasks and faster — all three of those (more subgoals attempted, more completed successfully, and time) were all statistically significant.
  • The subgoal group completed successfully more tasks on a retention task (which wasn’t the exact same task — they had to transfer knowledge) one week later, again statistically significantly.
But did the students really learn the mental model communicated by the subgoal labels, or did chunking things into subgoals just make the material easier to read and parse? Lauren ran a second experiment with 12 undergraduates, where she asked students to “talk aloud” while they did the task. The groups in the second experiment were too small to show the same learning benefits, but all the trends were in the same direction. The subgoal group was still out-performing the non-subgoal groups, but what’s more, they talked in subgoals! I find it amazing that she got these results from just one-hour sessions. In one hour, Lauren’s video taught undergraduate students how to get something done in App Inventor, and they could remember and do something new with that knowledge a week later — better than a comparable group of Georgia Tech undergraduates seeing the SAME videos (with only callout differences) doing the SAME tasks. That is efficient learning.

Here’s a version of a challenge that I have made previously: Show me pedagogical techniques in computing education that have statistically significant impacts on performance, speed, and retention, and lead to developing a mental model of (even part of) a software development process. What’s in our toolkit? Where is our measurable progress? The CMU Cognitive Tutors count, but they were 20-30 years ago and (unfortunately) are not part of our CS education toolkit today. Alice and Scratch are tools — they are what to teach, not how to teach. Most of our strong results (like Pair Programming, Caspersen’s STREAMS, and Media Computation) are about changing practice in whole courses, mostly for undergraduates, over several weeks. Designing instruction around subgoals in order to communicate a mental model is a small, “trivial” tweak that anyone can use no matter what they are teaching, with significant wins in terms of quality and efficiency. Instructional design principles could be used to make undergraduate courses better, but they’re even more critical when teaching adults, when teaching working professionals, when teaching high school teachers who have very little time. We need to re-think how we teach computing to cater to these new audiences. Lauren is showing us how to do that.

One of the Ed Psych reviewers wrote, “Does not break new ground theoretically, but provides additional evidence for existing theory using new tasks.” Yes. Exactly. This is no new invention from an instructional design perspective. It is simply mapping things that Richard has been doing for years into a computer science domain, into “new tasks.” And it was successful. Lauren is working with us this summer, and we will be trying it with high school teachers. Will it work the same as with GT undergraduates? I’m excited by these results — we’re already showing that the CSLearning4U approach of simply picking the low-hanging fruit from educational psychology can have a big impact on computing education quality and efficiency. (NSF CE21 funds CSLearning4U. Lauren’s work was supported by a Georgia Tech GVU/IPaT research grant. All the claims and opinions here are mine, not necessarily those of any of the funders.)


A role for Udacity: Filling the holes from formal computing education

Two recent blog posts are pointing out an interesting need. First, from “Gas Station without Pumps,” a discussion about how teaching writing and programming are similar in importance and in the difficulty of doing them well. There is a strong temptation to throw the problem over the fence to a small group of experts (writing instructors or computer science lecturers) teaching first-year classes. That happened in most universities to writing instruction over the past 2 decades, with the result that students write very few papers after their freshman year in most majors, and almost never get detailed feedback on them. It is happening in computer science also, except that the freshman CS courses already do not provide any feedback on programming style other than whether things compile and work on a few test cases. (That’s like checking English papers for word count, word length, and sentence length, but not for content—sort of what scoring of SAT essays is like.)

via Programming and writing: two fundamentals « Gas station without pumps. Next, from a new blog that I just discovered: a post from “Run(),” which talks about how Udacity is helping a long-time programmer become a better programmer. The first post points out how formal education is failing future programmers, because it’s not providing enough to develop real expertise. The second post agrees, but points out that maybe that’s the role of Udacity. I’m not arguing that Udacity or Coursera is dealing with teaching novices to code well — maybe it’s possible to do that via crowd-sourcing, but I don’t really see them filling that role now. I do see the possibility of Udacity filling other holes in formal computing education, like seeing multiple languages, which doesn’t happen much now.

It showed me that there are many people out there programming without truly understanding the essence of programming. I would bet that there are many out there just like Rick: people who dabble in programming or are self-taught, and who have focused so much effort on learning particular programming languages that they never noticed the common logical backbone that runs through so many of them. It does venture into a somewhat theoretical space, but I think many would stand to benefit from investing some time to understand these abstractions from the get-go. It also makes me think, once again, that you can become a better programmer if you are exposed to more than one programming language early on, so that you are not trapped in the workings of a single mental model.
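That shared backbone is easy to see even within one language. As a toy sketch (my illustration, not an example from the Run() post), here is the same summation written in three surface styles; the variable, the condition, and the iteration recur under each notation:

```python
from functools import reduce

# The same "logical backbone" -- a variable, a condition, iteration --
# expressed in three surface styles.

def total_while(xs):
    i, acc = 0, 0
    while i < len(xs):   # explicit condition and index-driven iteration
        acc += xs[i]
        i += 1
    return acc

def total_for(xs):
    acc = 0
    for x in xs:         # iteration with the index abstracted away
        acc += x
    return acc

def total_fold(xs):
    # functional style: the iteration is hidden inside reduce
    return reduce(lambda acc, x: acc + x, xs, 0)

print(total_while([1, 2, 3]), total_for([1, 2, 3]), total_fold([1, 2, 3]))
```

A learner who has only ever seen one of these styles can mistake its syntax for the idea itself; seeing a second style (or a second language) separates the backbone from the notation.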

Monday 17 December 2012

Let’s call it “Computer Science” AND “Programming”: The fat line where most people will be

Who knows why memes start, but one of the big ones in the Computing Education blogosphere today is “Let’s not call it ‘Computer Science’ if we really mean ‘Computer Programming.’” Based on Neil Brown’s excellent response, I suspect that it’s the UK “Computing at Schools” effort that is leading to this question. If you’re going to define a computer science curriculum, you’re going to have to define “computer science.” Both Neil and Alfred Thompson do a great job of helping us define kinds of computing and understand the goals of the different kinds of curricula. I’m more interested in the assumptions in the original blog about what the “regular people” are going to do. Jason Gorman claims that 99% of people involved in computing are just “users”: “I believe that what’s needed is a much more rounded computing education for the 99%, with IT blending seamlessly and ubiquitously into everyday lessons as well as home life.” They need computational thinking, but not programming, argues Jason. Jason sees that only 1% of students should get programming: “For the remaining 1%, of whom some might become software developers, we need programming in schools (and out of school). Lots of it.”

That sharp distinction is not how people work today. Chris Scaffidi, Mary Shaw, and Brad Myers explained this in 2007. For every software developer in the world, there are four more professionals who program but aren’t software developers, and another nine people who program without recognizing that they are programming.
  • A lot of Jason’s 99% are going to write SQL queries. That’s where much of the world’s data lives today, and lots of people need to get to that data. SQL queries require variables, conditional constraints, an understanding of data abstraction, and oftentimes a model of iteration. Looks like programming to me.
  • A lot of Jason’s 99% are going to create spreadsheets: Same variables, conditionals, models of iteration. Oh, and testing. One of the common themes in all the end-user programming literature is that new programmers don’t realize how many things can go wrong in writing programs, and how much time they’ll waste in debugging if they don’t develop good testing practices. There is a real economic cost to all those end-user programmers losing productive time to bugs. Is testing in the “Computer Science” side or the “Computer Programming” side?
  • All scientists and engineers will program: maybe just in Excel, many in MATLAB or R, and surprisingly many in both. Greg Wilson just sent me a great paper yesterday about all the ways that scientists and engineers code. They’re not professional software developers. They use programming to achieve their goals.
  • Jason’s worldview has this giant country of “Computer Users,” and this tiny Liechtenstein of a country called “Computer Programmers” next to it. The problem isn’t that the border between them is thin, porous, and maybe more gray than well-defined. It’s a really fat line, and that’s where most professionals will live. It’s really another whole country, lying in the border, and it swamps the other two.
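To make the SQL point concrete, here is a minimal sketch (the table, column names, and threshold are invented for illustration) using Python’s built-in sqlite3 module. Even a two-line query exercises a variable, a conditional constraint, data abstraction, and iteration over rows:

```python
import sqlite3

# An in-memory database standing in for the kind of data store many
# "non-programmer" professionals query every day.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 120.0), ("west", 80.0), ("east", 40.0), ("south", 200.0)],
)

# A variable (the ? parameter), a conditional constraint (WHERE),
# data abstraction (the table), and implicit iteration over rows:
threshold = 50.0
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales WHERE amount > ? "
    "GROUP BY region ORDER BY region",
    (threshold,),
).fetchall()
print(rows)  # [('east', 120.0), ('south', 200.0), ('west', 80.0)]
```

Whether the person writing that WHERE clause calls themselves a programmer or not, they are reasoning about variables, conditions, and sets of records.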
What people do with computing is changing, and growing. Programming is a medium, a literacy, a form of communication and expression. More and more people will use it. Jason also raises the issue that self-taught programming is just fine. Someone yelled at Alfred in his blog for not recognizing the greater value of self-taught programming. Neil Brown called it right: Emphasizing self-taught programming is another way of shutting women out of computing. I look at the issue from a literacy perspective. Some people can teach themselves to write on their own, but you can’t count on that to achieve literacy in your society. If a literacy is worth knowing, teach it. Computer programming is a literacy, and everyone should be taught it — and computer science, too.

Think of computing as a pyramid. At the base, we have computer users, who will probably make up about 99% of the pyramid. The next level up is people who write software (let’s ignore people who make computers – that’s electronic engineering, which a CS education won’t help you with), and they might account for the next 0.9% of the pyramid. Finally, at the top, are computer scientists – people who advance the concepts, design the programming languages and “push the envelope” for the 0.9% of us who write software day-to-day.

BBC News – Google funds computer teachers and Raspberry Pis in England

Sally Fincher came to the CS10K Professional Development workshop last week, and I asked her why she thought Google was doing this. She suggested that it’s probably because the UK doesn’t have an effort like NSF’s CS10K, so Google is trying to play that role. (Maybe the UK should try to clone Jan Cuny – if anyone can build up a nation of high school CS teachers, she can!)

He announced that Google would provide the funds to support Teach First – a charity which puts “exceptional” graduates on a six-week training programme before deploying them to schools where they teach classes over a two-year period. Many stay on beyond that term while others pursue places at leading businesses associated with the programme.

At present the scheme is limited to seven regions of England: East Midlands; Kent and Medway; London; North East; North West; West Midlands; and Yorkshire and the Humber. “Scrapping the existing curriculum was a good first step – the equivalent of pulling the plug out of the wall,” said Eric Schmidt, chairman of Google. Mr Schmidt said the donation would be used to train “more than 100 first rate science teachers over the next three years, with the majority focused on computer science”.

The new Core Standards for English Language Arts Literacy: Implications for Computing Literacy?

I found this fascinating discussion about the new Common Core standards efforts around English Language Arts, and it got me wondering about creating an analogy. Are there parallels to the below for computing literacy? “Students should read as much nonfiction as fiction.” What does that mean in terms of the notations of computing? Students should read as many program proofs as programs? Students should read as much code as comments? The “coherent knowledge” part seems to connect to the kinds of ideas in the CS:Principles effort. What is “close reading” of programming? I’m sure that there are not one-to-one mappings from English Language Arts to Computing, but they are interesting to think about. If this is what it means to be text literate, what does it mean to be computing literate?

Say what you will about CCSS, but there are three big ideas embedded within the English Language Arts standards that deserve to be at the very heart of literacy instruction in U.S. classrooms, with or without the standards themselves:

1. Students should read as much nonfiction as fiction.
2. Schools should ensure all children—and especially disadvantaged children—build coherent background knowledge that is essential to mature reading comprehension.
3. Success in reading comprehension depends less on “personal response” and more on close reading of text.

Economic impact of educational research: Does computing education research matter?

Most of the computing education research papers and proposals that I read make an economic justification for the work. Sometimes the work is a response to “Rising Above the Gathering Storm” (RAGS), and the goal is to generate more computing innovation to improve national competitiveness. Maybe the concern is that our modern economy needs more and better computing workers to fuel our information-driven businesses, so we are exploring novel curricula to create better learning. Maybe we want greater representation of women and under-represented minorities in order to provide greater economic impact, so we strive to improve student attitudes about and engagement with computing among middle and high school students. I’ve made all of these arguments myself. I recently read “Eight issues for learning scientists about education and the economy” (Journal of the Learning Sciences, 20(1), 3-49, Jan 2011) by Jeremy Roschelle, Marianne Bakia, Yukie Toyama, and Charles Patton. It has dramatically changed my perception of these issues. Jeremy and his colleagues dive into the economic literature to understand the education research impact that economists can actually support. The result helps me think about why we do what we do.

To start with, economists have found that education researchers’ overall impact on the economy appears to be bounded. For example, one third of all new jobs predicted by the BLS from 2006-2016 do not require formal education. Instead, they “are projected to fall into the short-term on-the-job training category.” Forty percent of all job openings require less than one month of on-the-job training. Most of those aren’t STEM jobs. A 2006 study “found a relatively small positive association between math and science academic achievement and economic growth.” Later studies (in 2007 and 2008) reanalyzed the data with varying results, but found statistically significant results that “are most plausible with a 15-year time lag between educational improvement and economic benefits.” So pushing for better STEM (with Computing in there) learning might have an impact on the economy, but we won’t see it for 15 years. Part of the problem here is confounded variables. If you have a nation-state with a strong interest in developmental policies, and the political will and economic might to put those policies into place, then good things are going to happen to the economy anyway — and far sooner than 15 years.

Let’s consider the competitiveness angle, which comes up often in computing education research. There is certainly evidence that United States test scores rank far behind those of countries like Finland and Singapore. But Roschelle et al. present evidence that the US is producing enough top scientists and engineers to support innovation, and that the US’s poor showing is more a factor of size than of educational quality: “Furthermore, in the United States, it is possible to find regions the size of Singapore and Finland that also score as well as Singapore and Finland (Guarino, 2008; SciMathMN, 2008).” Our bigger challenge is to reduce the variance in scores, which is the real reason for the low overall international performance. They argue that reducing inequities in education “is good for equipping all students for not only better access to valued jobs in a knowledge economy but also for democratic participation.” If you want to make the US more competitive in terms of international test scores, then don’t worry about the overall test score average — bring the bottom up, and the average will take care of itself. However, test scores may not actually have anything to do with economic competitiveness, because the economists that Roschelle et al. cite don’t really believe RAGS. We have enough top engineers and scientists, and the economy shows few signs of needing more. The US innovation engine is doing just fine. In fact, Roschelle et al. point out that Singapore sends delegations to the US to figure out what we’re doing right.

The part that most influenced my thinking was Roschelle et al.’s analysis of the STEM pipeline. We imagine a pipeline where:
  • We modernize curriculum and pedagogy in K-12 which results in better prepared students and greater interest in STEM disciplines;
  • These students then achieve more in STEM and pursue undergraduate degrees;
  • Graduates with STEM degrees become scientists and engineers in the labor and academic force;
  • Which results in greater national economic development.
Roschelle et al. consider each phase of the pipeline:
  • Yes, better K-12 curriculum does lead to better student achievement. Teacher quality, however, may play an even larger role, and the distribution of high-quality teachers is uneven and inequitable. There is far more research effort in curriculum than in teacher professional development. But even if you can improve all three of curriculum, pedagogy, and teacher quality, the results are surprisingly short-lived, because it’s a staged pathway and the stages don’t communicate. (I’m reminded of Alan’s quote, “You can fix a clock, but you have to negotiate with a system.”) “This may be because credentials, not specific higher order abilities, get students into university, and once students are there professors expect only traditional textbook learning (and correspondingly do not leverage what students have learned from more progressive curricula).” (Italics were in the original.) In other words: if you were in some terrific new 4th grade curriculum where you learned to do inquiry-based learning, that might raise your test scores that year, but you’ll get into college based on your SAT and ACT scores, and your university prof won’t assume you know how to do inquiry-based learning.
  • This next part was quite surprising to me: increasing student interest and achievement doesn’t change undergraduate STEM enrollment. “Lowell and Salzman (2007) found that although American high school students’ exposure to math and science has increased and their standardized test scores have increased over time, their interest in pursuing science and engineering majors has been stable…In other words, even with 20 years of steady improvements at the K-12 level, no increase occurred in the percentage of university students interested in majoring in STEM fields.” (p. 23) Despite our concerns about low scores, the references in Roschelle et al. say that the slope is upward. My guess is that improving interest and achievement is necessary, but not sufficient, for undergraduate STEM enrollment. If students don’t understand science and they hate it, they won’t major in it. But loving STEM or computing doesn’t mean you want a career in it. Bigger factors preventing greater undergraduate STEM degree production are poor-quality college STEM education (“e.g., large, lecture-based, fast-paced classes”) and poor access to high school “gatekeeper” courses. “Finishing a course beyond Algebra II, such as trigonometry or calculus, in high school more than doubled the probability that college-enrolled students would obtain their bachelor’s degree (Adelman, 1999).” Getting more of those courses available involves (in part) fixing the problem of access to high-quality teachers.
  • Surprisingly many students who graduate with STEM degrees don’t stick with STEM jobs. Within 4 years, 27% of science and engineering bachelors have moved on to unrelated jobs, and the percentage increases each year.

Overall, though, Roschelle et al. tell a story in favor of the importance of computing education research. Being able to use computing “in sense making” and for “information literacy” are on several education and economics groups’ lists of 21st century skills. Learning how to measure and improve those skills is among the top recommendations of their paper. And while the pipeline is not nearly as connected as we might like, it’s possible to have long-term effects. For example, the Perry Preschool Program had dramatic effects on its participants through age 27. Richard Hake had a related post recently. Why do we want to educate children and improve education overall? Hake argues, along with Roschelle et al., that competitiveness is not an important enough driver, and maybe there are even bigger issues than economics that we should be aiming toward.

Ravitch wrote: “. . . .the nation forgot that education has a greater purpose than preparing our children to compete in the global economy.” I agree with Coles and Ravitch that “global competitiveness” should not be the main driver of education reform. In a discussion list post “Is the ‘Skills Slowdown’ the Biggest Issue Facing the Nation?” at <http://bit.ly/9kIHAW>, I countered David Brooks’ claim <http://nyti.ms/LfJp1K> that it was, arguing that the “Threat to Life on Planet Earth” was the biggest issue facing the nation. Likewise, I think the “Threat to Life on Planet Earth,” and NOT “global competitiveness,” should be the main driver of education reform.

via LISTSERV 16.0 – AERA-L Archives. I learned from reading the Roschelle et al. paper that it is hard for computing education research to impact the overall economy, but, as Hake points out, there are more important goals for us. People need computing skills in the 21st century. Our skills can help the individuals in the bottom half of the economy become more marketable and raise their economic status (and that of their children), but more importantly, computing skills can make them better citizens in a democracy (e.g., as critical thinkers, or as people who know how to explore and test claims made in newspapers and by politicians). We do need more and better curriculum, because that does have an achievement impact, but we have a greater need to produce more and better teachers.

Arguing against Computer Programming for All

Read Write Web had a piece recently on why to have computing for everybody. It’s a nice piece to add to the other articles that have been published lately on the topic. What I found most interesting were the comments, which argued strongly against the computing-for-all perspective, suggesting instead that programming is too hard and too boring. Computing is important for everyone, but the tools may not be right yet. It isn’t obvious to me that programming must be too hard, must be too boring. How much easier can we make it, and still make it useful? Here is one of those comments:

As I’ve said in your previous article, to paraphrase: ARE YOU HIGH??? There is an analogy to be drawn between learning Mathematics and Computer Science. Both require a lot of abstract thinking – in different ways to be sure but nonetheless ABSTRACT THINKING. Most people find learning math to be a PAINFUL EXPERIENCE. I imagine the same will be true of computer programming. There is quite a lot of impetus to learn how to program mobile devices these days and yet the number of Computer Science majors here in America remains relatively the same. So clearly there is a substantial ability barrier to programming in any meaningful sense.

There’s also the boredom barrier. You mentioned children’s capacity to memorize endless facts about Pokemon. The difference here is that Children find Pokemon ENTERTAINING however, how does a teacher make Computer Programming entertaining?! They can’t because it’s impossible. If you start at the low level or even if you start at the windows UI level it simply is boring as hell. Average children will not be able to focus their attention on the programming subject. Learning spoken languages are completely different because the student is most likely interacting with someone who is already speaking the target language. That interactivity maintains their learning focus. Plus, when learning a spoken language there is ALWAYS context, you are referring to everyday persons, places and things which the student already has experience of. With Computer Programming sometimes there are concepts that have no context whatsoever and it makes it almost impossible to memorize. And in the case of the rightfully reviled Microsoft there are points where programming structures directly contradict themselves but it’s OK because the Compiler is programmed to catch that particular situation. Maybe it’s a glass is half empty or glass is half full point of view problem, but most people simply don’t have the intellectual capacity to learn computer science/computer programming. Learning a spoken language is FAR EASIER.

Friday 30 November 2012

Congratulations to Stephen Edwards and Virginia Tech: An endowed chair for innovation in engineering education

I’ve never heard of an endowed chair for engineering education at a research-intensive university. Bravo to Virginia Tech for creating such a position, and congratulations to Stephen Edwards (and to his colleagues for recommending him) for receiving it! A well-deserved honor! At its June meeting, Virginia Tech’s Board of Visitors confirmed the appointment of Virginia Tech’s Stephen Edwards, associate professor of computer science, as the new recipient of the W.S. “Pete” White Chair for Innovation in Engineering Education, effective Aug. 10, 2012.

The W.S. “Pete” White Chair for Innovation in Engineering Education was established by American Electric Power to honor Pete White, a 1948 graduate of Virginia Tech, and to encourage new interest in the teaching of engineering and improve the learning process. Edwards’ colleagues in the computer science department submitted the recommendation on his behalf. Cal Ribbens, the department’s associate head for undergraduate studies, cited Edwards as “easily one of the most innovative and energetic faculty members I have known in my 25 years at Virginia Tech.”

Disappointing Support for new NRC Framework for Science Standards from P21

I received the below statement via email, and I found it somewhat disappointing. Wholehearted support for the NRC Science Standards even though they ignore computing? From companies like Intel and Cisco? I had not heard of P21 previously, and wonder if there’s any connection between this group and Computing in the Core. My guess is that there isn’t, but there probably should be.

P21’s statement on new framework for voluntary Next Generation Science Standards. Washington, D.C. – June 5, 2012 – The Partnership for 21st Century Skills (P21), the leading national organization advocating for 21st century readiness for every student, believes the National Research Council’s new framework for science standards offers an exciting new vision for 21st century teaching and learning.

The Partnership for 21st Century Skills commends the National Research Council, its Leadership States, and partners for developing the Next Generation Science Standards. P21 recognizes that the fields of science and engineering represent not just leading sources of economic advancement, but serve as dynamic platforms for pursuing new knowledge that can lead to a love of learning and support the development of the 4Cs – creativity, collaboration, communication and critical thinking. This conceptual framework can begin to reshape what students need to know and be able to do in order to cultivate 21st century leaders in science and citizenship. P21 particularly recognizes the conceptual shifts in the NGSS as well as the inclusion of the science and engineering practices in this new approach to standards development.

The conceptual shifts emphasize real world interconnections in science, interdisciplinary integration across core subjects, and conceptual coherence from kindergarten through 12th grade, each of which aligns with P21’s approach to 21st century teaching and learning. More importantly, they emphasize not just the acquisition, but the application of content. P21 is pleased to see the NRC and the Leadership States embrace these shifts as each one is critical to preparing students for life and careers in the 21st century. The eight science and engineering practices also directly align with elements of the P21 Framework. From asking questions and defining problems to using models, carrying out investigations, analyzing and interpreting data, designing solutions and using evidence, these practices form the essential elements of the critical thinking and problem solving components of the P21 Framework. In addition, P21 commends the NGSS for recognizing the importance of communicating information as a scientific practice. 

Collaboration and teamwork are essential for academic and career success; therefore, P21 is pleased to see that the requirement for collaboration and collaborative inquiry and investigation begins in kindergarten and extends throughout the standards. P21 looks forward to working with the NRC, the P21 Leadership States, and partners to ensure the next steps in this process of creating science standards continue to value not only content knowledge but also the skills necessary for growth and success in the 21st century workplace. About P21: P21 is a national organization that advocates for 21st century readiness for every student. As the United States continues to compete in a global economy that demands innovation, P21 and its members provide tools and resources to help the U.S. education system keep up by fusing the 3Rs and 4Cs (critical thinking and problem solving, communication, collaboration, and creativity and innovation). While leading districts and schools are already doing this, P21 advocates for local, state and federal policies that support this approach for every school.

P21 Members: Adobe Systems, Inc., American Association of School Librarians, Apple Inc., Cable in the Classroom, Cengage Learning, Cisco Systems, Inc., The College Board’s Advanced Placement Program (AP), Crayola, Dell, Inc., EdLeader21, EF Education, Education Networks of America, Ford Motor Company Fund, GlobalScholar, Goddard Systems Inc., Hewlett Packard, Intel Corporation, Knovation, KnowledgeWorks Foundation, LEGO Group, Mosaica Education, National Academy Foundation, National Education Association, Pearson, Project Management Institute Educational Foundation, The Walt Disney Company, Wireless Generation, Verizon Foundation, and VIF International Education.

Stretching your mind: Arguing for multiple programming languages for designers

Nice piece from Eugene Wallingford on Venkat Subramaniam’s talk at JRubyConf 2012. Reminds me of Janet Murray’s argument for why designers should learn programming, and about the BLS data saying that we need more program designers. Subramaniam began his talk by extolling the overarching benefits of being able to program in many languages. Knowing multiple programming languages changes how we design software in any language. It changes how we think about solutions. Most important, it changes how we perceive the world. This is something that monolingual programmers often do not appreciate. When we know several languages well, we see problems — and solutions — differently.

Why learn a new language now, even if you don’t need to? So that you can learn a new language more quickly later, when you do need to. Subramaniam claimed that the amount of time required to learn a new language is inversely proportional to the number of languages a person has learned in the last ten years. I’m not sure whether there is any empirical evidence to support this claim, but I agree with the sentiment. I’d offer one small refinement: the greatest benefits come from learning different kinds of languages. A new language that doesn’t stretch your mind won’t stretch your mind.

Carl Wieman on Effective Teaching

This is a really nice piece on a lecture by Carl Wieman, whom I have mentioned previously. In one page, the summary hits most of the key ideas in How People Learn. “Memory is not talked about much in education, but it is critically important,” Wieman said, and the limited discussion that does occur focuses primarily on long-term memory while short-term working memory is ignored.

He compared the latter to a personal computer with limited RAM. “The more it is called upon to do, to remember, the harder it is to process. The average human brain [working memory] has a limit of five to six new items, it can’t handle anything more.”

A new item is anything that is not in the learner’s long-term memory, he continued. “Anything you can do to reduce unnecessary demands on working memory will improve learning.” Among them is the elimination of unnecessary jargon. Wieman asked: “That new jargon term that is so convenient to you, is it really worth using up 20% of the mental processing capacity of the students for that class period?” Demands on working memory can also be reduced by shifting some learning tasks, particularly the transfer of simple information, from the classroom to pre-reading assignments and homework.

Good academic leadership as a model for good teaching

There’s a Facebook meme making the rounds:

[Image: a two-column chart contrasting “Boss” traits with “Leader” traits]


I am no expert on management or leadership. A management expert may look at the above chart and shake her head sadly about the misconceptions of the commonsense view of management. Nonetheless, the chart sets up an interesting dichotomy that is worth exploring, in relation to academia and then to teaching. The abrupt firing of President Teresa Sullivan from the University of Virginia raises questions about academic leadership and its goals. The below quote from a Slate article on her ouster suggests that she fit under the “Leader” column above:

The first year of Sullivan’s tenure involved hiring her own staff, provost, and administrative vice president. In her second year she had her team and set about reforming and streamlining the budget system, a process that promised to save money and clarify how money flows from one part of the university to another. This was her top priority. It was also the Board of Visitors’ top priority—at least at the time she was hired. Sullivan was rare among university presidents in that she managed to get every segment of the diverse community and varied stakeholders to buy in to her vision and plan. Everyone bought in, that is, except for a handful of very, very rich people, some of whom happen to be political appointees to the Board of Visitors. (emphasis added) via Teresa Sullivan fired from UVA: What happens when universities are run by robber barons. – Slate Magazine.

I have known academic leaders like this. Jim Foley is famous at Georgia Tech for generating consensus on issues. My current school chair (ending his term this month) does a good job of engaging faculty in conversations and listening — he doesn’t always agree, but faculty opinions have swayed his choices. Eugene Wallingford has written a good bit about how to live on the right side of the chart. I am sure that all of us in academics have also met one or more academic bullies who land more often in the left column:
The self-righteous bully is a person who cannot accept that they could possibly be in the wrong. They are totally devoid of self-awareness and neither know nor care about the impact of their behaviour on other people. They are always right and others are always wrong. R. Namie and G. Namie (2009) described bullies as individuals who falsely believed they had more power than others did…They tend to have little empathy for the problems of the other person in the victim/bully relationship.

The bosses vs. leaders chart at the top of this post is about leadership, but it’s also about teaching. The common view of the undergraduate teacher veers toward the “boss” and “bully” characterizations above. We are “authorities.” The education jobs in academia are often called “Lecturers” or “Professors.” We lecture or profess to students — we tell them, we don’t ask them. We “command” students to complete assignments. We strive to make our lectures “always right.”

The best teachers look more like the right side of the chart at the top. From what we know about learning and teaching, a good teacher does “build consensus.” We don’t want to just talk at students — we want students to believe us and buy into a new understanding. One of my favorite education papers is “Cognitive Apprenticeship,” which explicitly talks about how an effective teacher “models/shows” a skill, and “develops” and “coaches” students. The biggest distinction between a “boss/bully” teacher and a “leader” teacher is listening to students. A good teacher “asks” students about their goals and interpretations. How People Learn emphasizes that we have to engage students’ prior understanding for effective learning. A good teacher sympathizes with the students’ perspectives, then responds not with a canned speech, but with a thoughtful response (perhaps in the form of an activity, not just a lecture) that develops student understanding.

I saw Eric Roberts receive the IEEE Computer Society Taylor L. Booth Education Award last week. I told him that I was eager to try a teleprompter for the first time. Eric said that he wouldn’t. He said that he would respond to the moment, the audience, and the speeches of the previous recipients. He would use the adrenalin of the moment to compose his talk on the fly. (Eric’s a terrific speaker, so he can pull that off better than I can.) He told me that it was the same as in class — he listens and responds to the students.

At the end of this week, I’m heading off to Oxford where I’ll teach in our study abroad program there. It will be Georgia Tech students and Georgia Tech faculty, but physically in Oxford. I’ll be teaching two classes: Introduction to Media Computation in Python (for my first time in seven years!) and Computational Freakonomics. I’ve taught at Oxford Study Abroad twice before, and loved it. Sure, Oxford is fabulous, but what I most enjoyed in past years (and what I most look forward to this time) is the teaching experience. I have 22 students registered in Media Comp (typically 150-300/semester at Georgia Tech, depending on the size of the lecture halls available), and 10 students in Comp Freak. We will meet for 90 minutes a day (each class, so 3 hours a day for me), four days a week. It’s an immersive experience. We will have meals together. In past years, I had “office hours” at my kitchen table and in impromptu meetings at a lab after dinner.

In enormous lecture halls with literally hundreds of students, it’s not always easy to be a “leader.” It’s easier in those settings to be the “boss” (even the “bully”), professing what’s right and ordering students to do their work. In a setting like Oxford, with smaller classes and more contact, I will have more opportunity to listen to my students, and the opportunity to develop my skills as a leader/teacher.


Sunday 4 November 2012

How do we encourage retention of knowledge in computing?

The scenario described in the experiment below has been repeated many times in the education literature: Students are asked to read some material (or listen to a lecture), are then asked to do something with that material (e.g., take a quiz, write down everything they can remember, do a mind-mapping exercise), and some time later take a test to measure retention. In the experiment described below, simple writing beat out creating a mental map. Interesting, but it’s an instance of a pattern that I wanted to highlight. This pattern of information+activity+retention is common, and really does work. Doing something with the knowledge improves retention over time.

So how do we do this in computer science? What do we ask our students to do after a lecture, or after reading, or after programming, to make it more likely that they retain what they learned? If our only answer is, “Write more programs,” then we’ve missed the point. What if we just had our students write down what they learned? Even if it was facts about the program (e.g., “The test for the sentinel value is at the top of the loop when using a WHILE”), it would help them retain that knowledge later. What this particular instance points out is that the retention activity can be very simple and still be effective. Not doing anything to encourage retention is unlikely to be effective.
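To make that sentinel fact concrete, here is a minimal sketch (my own illustrative example, not from any class materials): a WHILE loop tests for the sentinel at the top, so the sentinel itself — and anything after it — is never processed.

```python
def sum_until_sentinel(values, sentinel=-1):
    """Sum values until the sentinel is seen. The sentinel test is at
    the top of the WHILE loop, so the sentinel is never added to the total."""
    total = 0
    i = 0
    while i < len(values) and values[i] != sentinel:
        total += values[i]
        i += 1
    return total

print(sum_until_sentinel([3, 5, 2, -1, 99]))  # -1 stops the loop; 99 is never summed -> 10
print(sum_until_sentinel([-1, 4]))            # sentinel seen immediately -> 0
```

Writing down even this small a fact ("the test happens before the body runs") is exactly the kind of low-cost retention activity the studies describe.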

But two experiments, carried out by Dr Jeffrey Karpicke at Purdue University, Indiana, concluded that this was less effective than constant informal testing and reciting. Dr Karpicke asked around 100 college students to recall in writing, in no particular order, as much as they could from what they had just read from science material. Although most students expected to learn more from the mapping approach, the retrieval exercise actually worked much better to strengthen both short-term and long-term memory. The results support the idea that retrieval is not merely scouring for and spilling out the knowledge stored in one’s mind — the act of reconstructing knowledge itself is a powerful tool that enhances learning about science.


E-mails show UVa board wanted a big online push: McLuhan rolls over in grave

Released emails suggest that one of the reasons that the University of Virginia’s Board of Visitors ousted President Teresa Sullivan was that she was resistant to online education. Various theories have been traded among UVa-watchers in the last 10 days about the source of conflict between Sullivan and the board, and the e-mail records suggest that online education may have been among them. In her statement on the day the board announced Sullivan’s departure, Dragas used language similar to some of the columns that were being shared among board members, saying “We also believe that higher education is on the brink of a transformation now that online delivery has been legitimized by some of the elite institutions.”

Sullivan is not quoted at length in the e-mail files that were released, but one from an alumnus/donor to Kington says that Sullivan provided a “pedestrian” answer to a question about how UVa was embracing the online education revolution. Sullivan is not responding to press inquiries at this time, but sources familiar with discussions she has had on distance education said that she viewed it as an important trend, but had expressed skepticism about the idea that it was a quick fix for financial problems, and that she viewed distance education as having the potential to cost a lot of money without delivering financial gains. Sources also said she viewed distance education as an issue on which faculty input was crucial.

via E-mails show U.Va. board wanted a big online push | Inside Higher Ed. I’m just back from the ACM Education Council meeting, where Mehran Sahami put together a stellar panel on the topic of on-line education (also covered in Lisa K’s blog):
  • Woodie Flowers (MIT), who supports on-line training but believes that real education likely requires some “presence.” I mentioned previously that he’s been critical of MIT’s edX initiative. He emphasized the need for higher-quality educational software, using Avatar as his exemplar.
  • Candace Thille (Carnegie Mellon University), who heads OLI and had the best research support for the forms of online education that they’re developing. She started with a great quote from Herb Simon: “Improvement in post-secondary education will require converting teaching from a solo sport to a community-based research activity.” She emphasized the team approach they use to build their software.
  • John Mitchell (Stanford), who leads the online education effort there. He led the charge in implying enormous changes for higher education: “Will community colleges survive? How? Will college teaching follow the path of journalism?”
  • Peter Norvig (Director of Research at Google), who co-taught the 100K-student on-line AI course, was honest and pragmatic. He started on this because he wanted to do more than a book. He felt that the students really felt a “personal connection” with him, but when pressed, agreed that we don’t actually have much evidence of that. He sees the biggest role of these online courses as updating skills and re-training. He says that the technology just isn’t good enough yet. For example, the current tools don’t really respond to feedback — they’re linear experiences with no remediation or mechanisms for providing missing background knowledge.
  • Dave Patterson (Berkeley) who taught a MOOC (Massive Open On-line Course) on programming Web services. He was honest about the limitations of MOOCs, but still convinced that this is the beginning of the end for existing higher education. He pointed out that he also had a 90% dropout rate. He was the first MOOC teacher I've heard admit to “unbounded, worldwide cheating.” They were going to use plagiarism detection software, just to see how much cheating was going on, but they didn't need to. Large numbers of answers were “bit identical.”
One of the most important points for me was when Eric Roberts of Stanford pushed back against the flood of support for MOOCs, pointing out the costs of on-line education in terms of their impact on small schools, on the general (especially legislators’) perception of the role of higher education, and on what we teach (e.g., the media might encourage us to teach what we can easily do in these on-line forms, as opposed to what we think is important). “Does ‘free’ wipe out other things with demonstrable value?” Dave Patterson responded saying, “It doesn’t matter. It’s going to happen.”

I thought I heard McLuhan rolling over in his grave. “Media choices don’t matter?!?” But as I thought about it some more, it was less obvious to me which side McLuhan would fall on. On the one hand, McLuhan (in Understanding Media) argued that we should be aware of the implications of our media, of how our media change us. That view of McLuhan suggests that he would side with Eric, in thinking through the costs of the media, and he would be furious that Dave was unwilling to consider those implications. On the other hand, McLuhan would agree with Dave that media do obsolete some things (even things we value) while enhancing other things, and these media effects do just “happen.” Are we as a society powerless to choose media, to avoid those with effects that we dislike?

I see what happened at UVa to be about this question exactly. It’s not obvious to me that the MOOC efforts are better than existing higher education, in terms of reach into society, in terms of effectiveness for learning, and in terms of constructing the society we want. They serve a need, but they don’t replace colleges (as of yet). Teresa Sullivan’s concerns expressed above are well-founded, and she was wise to be hesitant. On the other hand, as Dave Patterson said, “It’s going to happen.” The UVa President may have been run over because she didn’t hop on the train fast enough for her Board of Visitors. Can we consider and choose our media, based on the implications we want, or must we accept the new media as inevitable and get pushed out of the way if we don’t embrace those media — even though those media could possibly destroy the institutions we believe serve an important need?


Ben Chun asks, “What is the CS Education ask?”

Ben Chun posts an interesting article critiquing the NSF CS10K project, which is worth reading. (Thanks to “Gas stations without pumps,” through which I first heard about Ben’s post.) I don’t agree with all of it — I’m not sure that it’s such a significant concern that the papers describing the CS10K project are “behind a paywall” — most of the information is readily available at the CS:Principles site (and I believe that the articles from the recent Inroads will be made available soon). But his main point is a valid one: This is a huge project, and it’s not obvious that it’s even possible, let alone whether it’ll be successful. He asks what specific policy changes are necessary. I don’t think anybody knows, because it’s not knowable in a general sense. Policy changes that impact high schools have to be made on a state-by-state basis. I know what we have done and would like to do in Georgia, and I know what’s going on in Massachusetts, South Carolina, and California, but all four of those are completely different. Ben calls the desired policy changes “a unicorn,” but I think it’s closer to “that animal I can hear in the other room, thumping around, but can’t tell what it is yet.” I also agree that we need to figure out how to engage the whole community. I believe that that is happening, through CSTA Chapters and efforts like the AP attestation. I don’t know how to make it happen faster or more broadly, but I do believe that NSF is bringing together a team of people who do.

I say that because if you’re actually putting together a “large-scale, collaborative project bringing together stakeholders from wide-ranging constituencies”, you don’t bury all the information about it behind a paywall. I happen to be teaching at UC Berkeley this summer, but otherwise I wouldn’t even have access to the paper that describes the CS10K project. And I think I’m the kind of person that might be able to help. I actually teach high school computer science! I want more colleagues! I believe CS education is vitally important for young people! The fact that the first result for “cs10k” in Google takes you nowhere is a problem. The lack of open, public discussion of the issues and plans is a problem. The lack of savvy about engaging the whole community — including high school teachers and administrators — is a problem.

But dire as it is, that’s not the biggest problem. The biggest problem is that we don’t agree on what we’re asking for. It’s not that we disagree. We just have no idea. But at least the goal has been made clear, even if not effectively publicized: a new AP course in 10,000 high schools by 2015. (Or maybe 2016 or 2017, I now hear.) In 2011, there were only 2,667 high schools in the world with students taking the AP Computer Science A exam. Today, I think there are about 2,100 high schools authorized to offer the course in the US (not that all of them actually do). There are about 40k total public and private high schools in the US.


Inventing a Worked Examples and Self-Explanation Method for CS Courses

I sent this idea to the mediacomp-teach mailing list, and got a positive response. I thought I’d share it here, too. I’m trying a worked examples + self-explanations approach in my Media Computation Python class that started Monday (first time I’ve taught it in seven years!) and in my Computational Freakonomics class (first time I’ve taught it in six years). Whether you’re interested in this method or not, you might like to use the resource that I’ve created. As I mentioned here, I’m fascinated by the research on worked examples and on self-explanations. The idea behind worked examples is that we ought to have students see more fully worked-out examples, with some motivation to actually study them. The idea behind self-explanations is that learning and retention are improved when students explain something to themselves (or others), in their own words. Peter Pirolli did studies where he had students use worked examples to study computer science (specifically, recursion), and with Mimi Recker, prompted CS students to self-explain and then studied the effect. In their paper, Pirolli and Recker found:

“Improvement in skill acquisition is also strongly related to the generation of explanations connecting the example material to the abstract terms introduced in the text, the generation of explanations that focus on the novel concepts, and spending more time in planning solutions to novel task components. We also found that self-explanation has diminishing returns.”

Here’s the critical idea: Students (especially novices) need to see more examples, and they need to try to explain them. This is what I’m doing at key points in the class:
  • Each team of two students gets one worked example in class. They have to type it in (to make sure that they notice all the details) and explain it to themselves: What does it do? How does it work?
  • Each team then explains it to the teams on either side of them.
  • At the end of the class, each individual takes one worked example and does the process themselves: Types it in, pastes it into a Word document (with an example of the output), and explains what the program does.
I very explicitly encourage them to do this with others, and to talk about their programs with one another. I want students to see many examples, and talk about them.

Sure, our book has many examples in it, but how many students actually look at all those examples? How many type them in and try them? Explain them to themselves? I’m doing this at four points in the Media Comp class: for images with getPixels, images with coordinates, sounds, and text and lists. For my Comp Freak class, students are supposed to have had some CS1, and most of them have seen Python at least once, so I’m only doing this at the beginning of the class, and only on text and lists. There are 22 students in my Media Comp class, so I needed 11 examples in class, then 22 examples one-for-each-person. Round it off to 35 examples per topic; across the four topic areas, that’s 140 worked examples. A lot of them vary in small ways — that’s on purpose. I wanted two teams to say, “I think our program is doing about the same thing as yours — what’s different?”
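The actual handouts aren’t reproduced here, but a hypothetical pair in the spirit of the text-and-lists examples might look like this — two programs with the same structure that differ in one small way (plain Python standing in for the class versions, names my own):

```python
def capitalize_words(phrase):
    # Example A: split into words, capitalize each, rejoin
    words = phrase.split()
    return " ".join(word.capitalize() for word in words)

def reverse_words(phrase):
    # Example B: same split/join structure, but reverse word order instead
    words = phrase.split()
    return " ".join(reversed(words))

print(capitalize_words("the quick brown fox"))  # The Quick Brown Fox
print(reverse_words("the quick brown fox"))     # fox brown quick the
```

Two teams comparing these would have to articulate exactly which line differs and what that difference does — which is the conversation the small variations are designed to provoke.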

I did discover some effects that surprised me. For example, try this:

def changeSound(sound):
  for sample in getSamples(sound):
    value = getSampleValue(sample)
    if value > 0:
      setSampleValue(sample, 4 * value)
    if value <= 0:
      setSampleValue(sample, 0)

Turns out that if you zero out all the negative samples, you can still hear the sound pretty clearly. I wouldn’t have guessed this. Whether you want to try this example-heavy approach or not, you might find all these examples useful. I’ve put all 140 examples on the teacher Media Comp sharing site (http://home.cc.gatech.edu/mediacomp/9 – email me if you want the key phrase and don’t have it). I started creating these in Word, but that was tedious to format well. I switched to LaTeX, because it nicely formatted the Python without much effort on my part. I’ve uploaded both the PDF and the LaTeX, since the LaTeX provides easy copy-paste text.
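The function above uses Media Computation’s JES sound functions (getSamples, setSampleValue), so it won’t run in a standard Python interpreter. As a rough sketch of the same transformation, here it is applied to a plain list of sample values (a stand-in of my own, not the class code): positive samples are amplified 4x, and everything else is zeroed.

```python
def change_samples(samples):
    """Plain-list analogue of changeSound: amplify positives, zero the rest."""
    result = []
    for value in samples:
        if value > 0:
            result.append(4 * value)
        else:  # value <= 0: zero out negative (and zero) samples
            result.append(0)
    return result

print(change_samples([100, -50, 0, 2000, -3000]))  # [400, 0, 0, 8000, 0]
```

The surprising perceptual result — that the sound stays intelligible with all negative samples zeroed — of course only shows up when you run the real version on actual audio.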

My Comp Freak students are doing their assignment now (due tonight), and we just did it for the first time in the Media Comp class today (the take-home portion is due in two days). I was pleased with the feedback. I got lots of questions about details that students don’t normally ask about at the second lecture (e.g., “Is makeColor doing something different than setRed, setGreen, and setBlue? What’s the difference between colors and pixels?”). My hope is that, when they start writing their own code next week, they won’t be stymied by stupid syntax errors, because they will have struggled with many of the obvious ones while working with complete code. I’m also hoping that they’ll be more capable of understanding (and thus, debugging) their own code. Most fun: I had to throw the students out of class today. Class ended at 4:10, and we had a faculty meeting at 4:30. Students stayed on, typing in their code, looking at each other’s effects. At 4:25, I shooed them off.

I am offering extra credit for making some significant change (e.g., not just changing variable names) to the example program, and turning that in, too (with explanation and example). What I didn’t expect is that they’re relating the changes to code we’ve talked about, like in this comment from a student that was just turned in: “I realized I made an error in my earlier picture so I went back and fixed it. I also added in another extra credit picture. I made a negative of the photo. It looks pretty cool!” It’s interesting to me that she explicitly decided to “make a negative” (and integrated the code to do it) rather than simply adding/changing a constant somewhere to get the extra credit cheaply. All my Media Comp students are Business and Liberal Arts students (and the class is 75% female — while Comp Freak has 1 female and 9 males). I got a message from one of the Media Comp students yesterday, asking about some detail of the class, where she added: “We all were pleasantly surprised to have enjoyed class yesterday!” I take the phrase “pleasantly surprised” to mean that the expectations were set pretty low.
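For readers curious about the “negative” effect the student describes: each color channel is replaced by 255 minus its value. A minimal plain-Python sketch of that idea (my own illustration, using (r, g, b) tuples in place of Media Computation pixel objects):

```python
def negative(pixels):
    """Invert each color channel: 255 minus the channel value.
    Pixels are modeled here as plain (r, g, b) tuples."""
    return [(255 - r, 255 - g, 255 - b) for (r, g, b) in pixels]

print(negative([(0, 0, 0), (255, 128, 10)]))  # [(255, 255, 255), (0, 127, 245)]
```

The class version would loop over getPixels(picture) and use the pixel setter functions, but the arithmetic is the same.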
