Saturday, 16 March 2013

Computational Thinking in K–12: A report in Ed Researcher

Shuchi Grover and Roy Pea (Stanford) have a review of the field of computational thinking in K–12 schools in this month’s Educational Researcher. It’s a very nice paper. I’m excited that the paper is published where it is! Educational Researcher is the main publication venue for the largest education research organization in the United States (the American Educational Research Association). Roy has been doing work in computing education for a very long time (e.g., “On the prerequisites of learning computer programming,” Pea and Kurland, 1983). This is computational thinking hitting the education mainstream.

Jeannette Wing’s influential article on computational thinking 6 years ago argued for adding this new competency to every child’s analytical ability as a vital ingredient of science, technology, engineering, and mathematics (STEM) learning. What is computational thinking? Why did this article resonate with so many and serve as a rallying cry for educators, education researchers, and policy makers? How have they interpreted Wing’s definition, and what advances have been made since Wing’s article was published? This article frames the current state of discourse on computational thinking in K–12 education by examining mostly recently published academic literature that uses Wing’s article as a springboard, identifies gaps in research, and articulates priorities for future inquiries.


Summer Camps in Georgia: Roll-up Report and Invitation to Play with Data (SECOND TRY)

I had posted this blog piece back in January, but was then asked to take it down. There were concerns that the data were not anonymized enough to guarantee participant anonymity. Tom McKlin did a great job of working with the Human Subjects Review Board here at Georgia Tech to figure out a set of data that would be useful to other computing education researchers but would still guarantee participant anonymity (to the extent feasible). Here’s our newly approved data set.

Our external evaluators (The Findings Group) have just produced the roll-up analysis for all the GaComputes-related summer camps from Summer 2012. These include camps offered at Georgia Tech and those offered elsewhere in the state, started with GaComputes seed grants (as described in the 2011 SIGCSE paper that I blogged about). The results are strong:
  • Over 1,000 K-12 students participated statewide.
  • The camps were even more effective with women than men.
  • There was a statistically significant improvement in content knowledge for Scratch, Alice, and App Inventor, across genders, ethnic groups, and grade levels.
  • “The computing camps were particularly effective at increasing students’ intent to pursue additional computing, self‐efficacy in doing computing, and sense of belonging in computing.”
  • “Minority students reported significantly more growth in their intent to persist in computing than majority students.”
The Findings Group had a particularly interesting proposal for the Computing Education Research community. They are making all the survey data from all the camps freely available, in an anonymous form. They have a sense that there is more to learn from these data. It’s a lot of students, and there’s a lot to explore there in terms of motivation, engagement, and learning. If you play with these data, do let us know what you learn!
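
If you want to take them up on the invitation, here’s a minimal sketch of the kind of pre/post analysis the evaluators describe, in Python with pandas. The file name and column names below are hypothetical placeholders, not the released schema; adjust them to whatever the actual data set uses.

```python
# Minimal sketch: exploring anonymized pre/post camp survey data.
# The file name and column names are hypothetical -- substitute the
# actual schema of the released data set.
import pandas as pd
from scipy import stats

df = pd.read_csv("gacomputes_camps_2012.csv")  # hypothetical file name

# Paired pre/post comparison of content knowledge (hypothetical columns).
pre, post = df["content_pre"], df["content_post"]
t, p = stats.ttest_rel(pre, post)
print(f"Overall gain: {post.mean() - pre.mean():.2f} (t={t:.2f}, p={p:.4f})")

# Break the gain down by subgroup, e.g. gender, ethnicity, or grade level.
for group, sub in df.groupby("gender"):
    gain = (sub["content_post"] - sub["content_pre"]).mean()
    print(f"{group}: mean gain {gain:.2f} (n={len(sub)})")
```

The same groupby pattern works for the motivation and engagement scales (intent to persist, self-efficacy, sense of belonging), which is where I suspect there’s the most left to learn.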



Friday, 15 March 2013

IT Research Hearing Focuses on Security and Computing Education

A congressional committee heard about the importance of computing research, and the committee members responded with a call for more cybersecurity and more computing education.

Lazowska spoke about the NITRD program’s history and the role of computing in the US economy, showing an NRC chart on research and IT sectors with billion-dollar markets. When questioned about cybersecurity by Congressman Steven Stockman (R-TX), Lazowska also talked about the need to integrate security into the building of systems, rather than adding it on at the end as a defensive measure. Stockman, who credits support from the fiscally conservative Tea Party for his election, had the quote of the hearing: after pressing Lazowska for an order-of-magnitude estimate of how much additional investment in fundamental cybersecurity research would move the needle, he seemed surprised that the number PITAC requested back in 2005 was “only” $90 million. “Well, I’m interested in getting you billions, not millions,” he said, indicating he was very concerned about U.S. vulnerability to cyber attack.

The Subcommittee members were very interested in how to tackle the education problem in computing, as well as how they could help researchers address cybersecurity moving forward.


CSAB and ABET/CAC Criteria Committee Survey

At the ACM Education Council meeting this last weekend, I heard about changes in the accreditation criteria being considered for computing disciplines (e.g., Computer Science, Information Systems, Information Technology). The committee has asked for feedback on several issues that they’re considering, e.g., how much mathematics do students really need in computing? That question, in particular, is one that I’m reading about in The Computer Boys Take Over by Nathan Ensmenger. Ensmenger tells the story of how mathematics got associated with the preparation of programmers (not computer scientists). Mathematics showed up on the early aptitude tests that industry created as a way of figuring out who might be a good programmer. But Ensmenger points out that mathematical ability only correlated with performance in academic courses, and did not correlate with performance as a programmer. It’s not really clear how much math is really useful (let alone necessary) for being a programmer. Mathematics got associated with programming decades ago, and it remains there today.

The Committee is inviting feedback on this and other issues that they’re considering:

This survey was developed by a joint committee from CSAB and the ABET Computing Accreditation Commission, and is designed to obtain feedback on potential changes to the ABET Computing Accreditation Criteria. We are looking for opinions about some of the existing ideas under discussion for change, as well as other input regarding opportunities to improve the existing criteria. Respondents to the survey may be computing education stakeholders in any computing sub-discipline, including computer science, information systems, information technology, and many others. Stakeholders may include professionals in the discipline, educators, and/or employers of graduates from computing degree programs.


First Workshop on AI-Supported Education for Computer Science

Shared by Leigh Ann Sudol-DeLyser (Visiting Scholar, New York University) with the SIGCSE list:

Dear SIGCSE-ers! I would like to announce the First Workshop on AI-Supported Education for Computer Science, to be held at the Artificial Intelligence in Education conference this summer in Memphis, and invite the submission of papers from the SIGCSE community. Please see the website at: https://sites.google.com/site/aiedcs2013/. Submissions are due by April 12, 2013.

Workshop Description:

Designing and deploying AI techniques within computer science learning environments presents numerous important challenges. First, computer science focuses largely on problem solving skills in a domain with an infinitely large problem space. Modeling the possible problem solving strategies of experts and novices requires techniques that represent a large and complex solution space and address many types of unique but correct solutions to problems. Additionally, with current approaches to intelligent learning environments for computer science, the problems provided by AI-supported educational tools are often difficult to generalize to new contexts. The need is great for advances that address these challenging research problems. Finally, there is a growing need to support affective and motivational aspects of computer science learning, to address widespread attrition of students from the discipline. Addressing these problems as a research community, AIED researchers are poised to make great strides in building intelligent, highly effective AI-supported learning environments and educational tools for computer science and information technology.

Topics of Interest:
  • Student modeling for computer science learning
  • Adaptation and personalization within computer science learning environments
  • AI-supported tools that support teachers or instructors of computer science
  • Intelligent support for pair programming or collaborative computer science problem solving
  • Automatic question generation or programming problem generation techniques
  • Affective and motivational concerns related to computer science learning
  • Automatic computational artifact analysis or goal/plan recognition to support adaptive feedback or automated assessment
  • Discourse and dialogue research related to classroom, online, collaborative, or one-on-one learning of computer science
  • Online or distributed learning environments for computer science
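
The “many unique but correct solutions” challenge in the workshop description above is easy to make concrete. One common workaround is to assess student programs behaviorally, against a test suite, rather than by matching them to a single reference solution. Here’s a minimal sketch of that idea; the exercise, function name, and test cases are all hypothetical, invented for illustration.

```python
# Minimal sketch: behavioral assessment of student solutions.
# Rather than comparing a student's code to one reference solution,
# run it against a test suite so that any correct strategy passes.
# The exercise ("write absolute_value") and the tests are hypothetical.

TEST_CASES = [(-3, 3), (0, 0), (7, 7)]

student_solutions = [
    # Two structurally different but equally correct strategies:
    "def absolute_value(x):\n    return x if x >= 0 else -x",
    "def absolute_value(x):\n    if x < 0:\n        return -x\n    return x",
]

def passes_tests(source: str) -> bool:
    """Execute student source and check it against the test suite."""
    namespace = {}
    try:
        # Caution: real systems must sandbox untrusted student code.
        exec(source, namespace)
        return all(namespace["absolute_value"](arg) == expected
                   for arg, expected in TEST_CASES)
    except Exception:
        return False

for i, src in enumerate(student_solutions, 1):
    print(f"Solution {i}: {'correct' if passes_tests(src) else 'incorrect'}")
```

Both solutions pass, even though they share no surface structure. Giving strategy-level feedback on top of that (plan recognition, student modeling) is exactly the harder problem the workshop is soliciting work on.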

Online Learning Outcomes Equivalent to Traditional Methods: But what about the drops?

This is a great result, if I can believe it. They took 605 students, some in a traditional course and some in a “hybrid” course, and did pre/post tests. They found no difference in outcomes. Here’s what I’m not sure about: What happened to the students who failed or who withdrew? Other studies have suggested that online courses have higher withdrawal/failure rates. Is that the case here? There is only one footnote (page 18) that mentions withdrawal/failure: “(27) Note that the pass rate in Figure 1 and Appendix Table A3 cannot be used to calculate the percentage of students who failed the course because the non-passing group includes students who never enrolled or withdrew from the course without receiving a grade.” But that’s it. If you lose more students in one format, and the students you lose are the weaker students (not an unreasonable assumption), then having the same learning gains doesn’t mean the formats are equivalent for all students. It means that you’ve biased your sample.
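
To see why differential attrition matters, here’s a small simulation sketch. The numbers are invented purely for illustration (they are not from the study): every student in both formats gains exactly the same amount, but if one format loses more of its weaker students before the post-test, its completers look better anyway.

```python
# Minimal sketch: how differential attrition biases pre/post comparisons.
# All numbers are invented for illustration; none come from the study.
import random

random.seed(1)
N = 300

def simulate(dropout_rate):
    """Students have identical true gains; weaker students drop out."""
    pre = [random.gauss(50, 10) for _ in range(N)]
    post = [p + 10 for p in pre]  # every student gains exactly 10 points
    # Sort by pre-test score; the weakest fraction never takes the post-test.
    completers = sorted(range(N), key=lambda i: pre[i])[int(N * dropout_rate):]
    return sum(post[i] for i in completers) / len(completers)

trad = simulate(dropout_rate=0.05)    # traditional: loses 5% (hypothetical)
hybrid = simulate(dropout_rate=0.25)  # hybrid: loses 25% (hypothetical)
print(f"Traditional completers' mean post-test: {trad:.1f}")
print(f"Hybrid completers' mean post-test:      {hybrid:.1f}")
# The hybrid group scores higher, even though every student in both
# formats learned exactly the same amount: the sample is biased.
```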

The researchers asked the students to complete a number of tests and questionnaires before beginning the course and again after completing it, and they analyzed and compared the results between the two groups of students. The results revealed no statistical difference in educational outcomes between the two groups. In fact, the students in the hybrid course performed slightly better, but not by a statistically significant margin.
