Saturday 30 March 2013

CERIAS: Some thoughts on “cybersecurity” professionalization and education

Relates to the issue of when an employee needs college, and when they don’t. For cybersecurity, they do. Relates to the growing needs in cybersecurity in the UK and in the US. Too many current educational programs stress only the technology — and many others include significant technology training components — because of pressure by outside entities, rather than a full spectrum of education and skills. We have a real shortage of people who have any significant insight into the application of policy, management, law, economics, psychology, and the like to cybersecurity, although arguably those are some of the problems most obvious to those who have the long view. (BTW, that is why CERIAS was founded 15 years ago with faculty in nearly 20 academic departments: “cybersecurity” is not solely a technology issue, and several other universities have recognized this by treating it more holistically.) These other skill areas often require deeper education and repeated exercises involving abstract thought. It seems that not as many people are naturally capable of mastering these skills. The primary means we use to designate mastery is the postsecondary degree, although its exact meaning varies with the granting institution.

Deepa Singh
Business Developer
Email Id:-deepa.singh@soarlogic.com

Free Early-Career Learning Sciences Workshop at CMU LearnLab


Call for Participation
2nd Annual Learning Science Workshop: Research and Innovation for Enhancing Achievement and Equity
http://www.learnlab.org/opportunities/summerworkshop.php
June 22-23, Carnegie Mellon University, Pittsburgh, PA
Applications Due May 5, 2013

*No Cost To Attend*

Overview

LearnLab, an NSF Science of Learning Center (SLC) at Carnegie Mellon and the University of Pittsburgh, has an exciting summer research opportunity available to early-career researchers in the fields of psychology, education, computer science, human-computer interaction, and language technologies. The workshop is targeted to senior graduate students, post-docs, and early-career faculty. The workshop seeks broad participation, including members of underrepresented groups as defined by NSF (African American, Hispanic, Native American) who may be considering a research or faculty position in the learning sciences. This two-day workshop immediately precedes the LearnLab Summer School (www.learnlab.org/opportunities/summer/). Our research theme is research and innovation for enhancing achievement and equity, including these five areas:

* Enhancing Achievement through Educational Technology and Data Mining. Using domain modeling and large data sets to discover when learning occurs and to provide scaffolding for struggling students. See http://www.learnlab.org/research/wiki/index.php/Computational_Modeling_and_Data_Mining.

* 21st Century Skills, Dispositions, and Opportunities. Re-examining the goals of education and assessment and considering transformative changes in how and where learning occurs.

* Opening Classroom Discourse. Studying how classroom talk contributes to domain learning and supports equity of learning opportunity. See LearnLab’s Social-Communicative Factors thrust: www.learnlab.org/research/wiki/index.php/Social_and_Communicative_Factors_in_Learning.

* Course-Situated Research. Running principle-testing experiments while navigating the complex waters of real-world classrooms. See www.learnlab.org/research/wiki/index.php/In_vivo_experiment.

* Motivation Interventions for Learning. Implementing theory-based motivational interventions targeting at-risk populations to improve robust student learning. See http://www.learnlab.org/research/wiki/index.php/Metacognition_and_Motivation.

The substantive focus of the workshop is the use of current research and innovations to enhance achievement and equity at all levels of learning. Activities will include demonstrations of the diverse set of ongoing learning sciences research projects at LearnLab, and poster presentations or talks by participants. Participants will also meet with LearnLab faculty in research groups and various informal settings. We will provide information about becoming a part of the Carnegie Mellon or University of Pittsburgh learning science community. In addition to these substantive themes, the workshop will provide participants with opportunities for professional development and the chance to gain a better understanding of the academic career ladder. These include mentoring that focuses on skills, strategies, and “insider information” for career paths. Sessions will include keynote speakers and LearnLab senior faculty discussing professional development topics of interest to the attendees. These may include the tenure and promotion process, launching a research program, professionalism, and proposal writing, among other topics. There is no cost to attend this workshop.

We are very pleased to announce that the workshop will have two distinguished keynote speakers: Nora S. Newcombe, Ph.D., is the James H. Glackin Distinguished Faculty Fellow and Professor of Psychology at Temple University. Dr. Newcombe is the PI of the Spatial Intelligence and Learning Center (SILC), headquartered at Temple and involving Northwestern, the University of Chicago, and the University of Pennsylvania as primary partners. Dr. Newcombe was educated at Antioch College, where she graduated with a major in psychology in 1972, and at Harvard University, where she received her Ph.D. in Psychology and Social Relations in 1976. She taught previously at Penn State University. A nationally recognized expert on cognitive development, Dr. Newcombe has focused her research on spatial development and the development of episodic and autobiographical memory. Her work has been federally funded by NICHD and the National Science Foundation for over 30 years. She is the author of numerous scholarly chapters and articles on aspects of cognitive development, and the author or editor of five books, including Making Space: The Development of Spatial Representation and Reasoning (with Janellen Huttenlocher), published by the MIT Press in 2000.

Tammy Clegg, Ph.D. is an assistant professor in the College of Education with a joint appointment in the College of Information Studies at the University of Maryland. She received her PhD in Computer Science at Georgia Tech in 2010 and her Bachelor of Science in Computer Science from North Carolina State University in 2002. From 2010-2012 Tamara was a postdoctoral fellow at the University of Maryland with the Computing Innovations Fellows program. Her work focuses on developing technology to support life-relevant learning environments where children engage in science in the context of achieving goals relevant to their lives. Kitchen Chemistry is the first life-relevant learning environment she designed along with colleagues at Georgia Tech. In Kitchen Chemistry, middle-school children learn and use science inquiry to make and perfect dishes. Clegg uses participatory design with children to design these new technologies. Her work currently includes creating new life-relevant learning environments (e.g., Sports Physics, Backyard Biology) to understand how identity development happens across these environments. From this analysis, she aims to draw out design guidelines for life-relevant learning activities and technology in various contexts (e.g., sports, gardening).

About LearnLab

LearnLab is funded by the National Science Foundation (award number SBE-0836012). Our center leverages cognitive theory and computational modeling to identify the instructional conditions that cause robust student learning. Our researchers study robust learning by conducting in vivo experiments in math, science and language courses. We also support collaborative primary and secondary analysis of learning data through our open data repository LearnLab DataShop, which provides data import and export features as well as advanced visualization, statistical, and data mining tools. To learn more about our cognitive science theoretical framework, read our Knowledge-Learning-Instruction Framework. The results of our research are collected in our theoretical wiki which currently has over 400 pages. It also includes a list of principles of learning which are supported by learning science research. The wiki is open and freely editable, and we invite you to learn more and contribute.

Application Process

Applicants should email their CV, this demographic form, a proposed presentation title and abstract, and a brief statement describing their research interests to Jo Bodnar (jobodnar@cs.cmu.edu) by May 5, 2013. Please use the subject line “Application for LearnLab Summer Workshop 2013.” Upon acceptance, we will let you know whether you have been selected for a talk or a poster presentation.

Costs

There is no registration fee for this workshop. However, attendance is limited so early applications are encouraged. Scholarships for travel are available. Scholarships will be awarded based on your application, including your research interests, future plans, and optional recommendation letter.



Cybersecurity as a motivation for drawing high schoolers into CS

We’ve talked about the UK and the US worrying about having enough cyberwarriors to deal with future cybersecurity issues. CMU is helping to build a game to entice high school students into computing, with cybersecurity as the focus. Carnegie Mellon University and one of the government’s top spy agencies want to interest high school students in a game of computer hacking.

Their goal with “Toaster Wars” is to cultivate the nation’s next generation of cyber warriors in offensive and defensive strategies. The free, online “high school hacking competition” is scheduled to run from April 26 to May 6, and any U.S. student or team in grades six through 12 can apply and participate. David Brumley, professor of computer science at Carnegie Mellon, said the game is designed to be fun and challenging, but he hopes participants come to see computer security as an excellent career choice.


Feds give nudge to competency-based education: Beyond the Credit Hour

Of all the open learning movement initiatives, this may be the most important. The credit hour is a poor measure of learning attained. It’s too large a grain size to be useful as a measure of instruction. Moving to competencies (whatever those may end up being) is a move in the right direction, in terms of facilitating our ability to measure the amount of learning and the amount of teaching effort involved in an education program.
The U.S. Department of Education has endorsed competency-based education with the release today of a letter that encourages interested colleges to seek federal approval for degree programs that do not rely on the credit hour to measure student learning.

Department officials also said Monday that they will give a green light soon to Southern New Hampshire University’s College for America, which would be the first to attempt the “direct assessment” of learning – meaning no link to the credit hour – and also be eligible for participation in federal financial aid programs. via Feds give nudge to competency-based education | Inside Higher Ed.


Guided Computer Science Inquiry in Data Structures Class

Inquiry-based learning is the best practice for science education. Education activities focus on a driving question that is personally meaningful for students, like “Why is the sky blue?” or “Why is the stream by our school so acidic (or basic)?” or “What’s involved in building a house powered entirely by solar power?” Answering those questions leads to deeper learning about science. Learning sciences results support the value of this approach. It’s hard for us to apply this idea from science education and teach an introductory computing course via inquiry, because students may not have many questions that relate to computer science when they first get started. Questions like “How do I make an app to do X?” or “How do I use Snap on my laptop?” are design- and task-oriented, not inquiry-oriented. Answering them may not lead to deeper understanding of computer science. Our everyday experience of computing, through (hopefully) well-designed interfaces, hides away the underlying computing. We only really start to think about computing at moments of breakdown (what Heidegger called “present-at-hand”): “Why can’t I get to YouTube, even though the cable modem light is on?” and “How does a virus get on my computer, and how can it pop up windows on my screen?” It’s an interesting research project to explore what questions students have about computing when they enter our classes.

I realized this semester that I could prompt students to define questions for inquiry-based learning in a second computer science class, a data structures course. I’m teaching our Media Computation Data Structures course this semester. These students have seen under the covers and know that computing technology is programmed. I can use that to prompt them about how new things work. What I particularly like about this approach is how it gets me out of the “Tour of the Code” lecturing style. Here’s an example. We had already created music using linked lists of MIDI phrases. I then showed them code for creating a linked list of images, then presented this output.


I asked students, “What do you want to know about how this worked?” This was the gamble for me — would they come up with questions? They did, and they were great questions. “Why are the images lined up along the bottom?” “Why can we see the background image?” I formed the students into small groups, and assigned each group one of the questions that the students had generated. I gave them 10 minutes to find the answers, and then report back. The discussion around the room was on-topic and had the students exploring the code in depth. We then went through each group to get their answers. Not every answer was great, but I could take the answer and expand upon it to reach the issues that I wanted to make sure we highlighted. It was great — way better and more interactive than me paging through umpteen PowerPoint slides of code. Then I showed them this output from another linked list of images.


Again, the questions that the students generated were terrific. “What data are stored in each instance such that some have positions and some are just stacked up on the bottom?” and “Why are there gaps along the bottom?” Still later in the course, I showed them an animation, rendered from a scene graph, and I showed them the code that created the scene graph and generated the animation. Now I asked them about both the animation code and the class hierarchy that the scene graph nodes were drawing upon. Their questions were both about the code and about the engineering of the code — why was it decomposed in just this way?
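To make the students’ questions concrete, here is one plausible shape for such a linked list of pictures, sketched in Python. The actual course code is Java-based Media Computation, so the class name, method names, and the layout rule below are invented for illustration, not the real classes.

```python
class PictureNode:
    """One node in a linked list of images (illustrative, not the course's code)."""

    def __init__(self, image, position=None):
        self.image = image        # the picture this node holds
        self.position = position  # explicit (x, y) placement, or None
        self.next = None          # next node in the list

    def append(self, node):
        """Attach a node at the end of the list."""
        last = self
        while last.next is not None:
            last = last.next
        last.next = node


def layout(head, canvas_bottom=300, gap=100):
    """Walk the list and compute where each image gets drawn.

    Nodes with no explicit position are stacked left-to-right along the
    canvas bottom: one plausible answer to "Why are the images lined up
    along the bottom?" A fixed advance between them would also leave
    gaps when image widths vary.
    """
    placements = []
    x = 0
    node = head
    while node is not None:
        if node.position is not None:
            placements.append((node.image, node.position))
        else:
            placements.append((node.image, (x, canvas_bottom)))
            x += gap  # fixed advance regardless of image width
        node = node.next
    return placements
```

Tracing `layout` over a list that mixes positioned and unpositioned nodes surfaces exactly the kind of question the students raised: what data each instance stores determines whether it lands at its own position or in the default row.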


(We didn't finish answering these questions in a single class period, so I took pictures of the questions so that I could display them and we could return to them in the next class.)
I have really enjoyed these class sessions. I’m not lecturing about data structures — they’re learning about data structures. The students are really engaged in trying to figure out, “How does that work like that?” I’m busy in class suggesting where they should look in the code to get their questions answered. We jointly try to make sense of their questions and their answers. Frankly, I hope never again to have to show sequences of PowerPoint slides of code.


Sunday 24 March 2013

Fostering Gender Diversity in Computing: March Issue of IEEE Computer

The March issue of IEEE Computer is going to be devoted to fostering gender diversity in computing. It looks like it’s going to be a great issue, including a piece by my school chair, Annie Anton. Why is this important to us? Computing and information technology are among the fastest growing U.S. industries: technical innovation plays a critical role in every sector of the U.S. and global economy, and computing ranks among the top 10 high-profile professions. However, as a nation, we are not prepared to attract and retain the professional workforce required to meet future needs. By 2018, US universities will produce only 52 percent of the computer science bachelor’s degrees needed to fill the 1.4 million available jobs.

A lack of diverse perspectives will inhibit innovation, productivity, and competitiveness. In addition to failing to attract new and diverse talent, industry is also losing trained professionals who are already interested in technology. While 74 percent of professional women report “loving their work,” 56 percent leave at the career “midlevel” point just when their loss is most costly to the company — more than double the quit rate for men. via Fostering Gender Diversity in Computing.



Thy Employee is Not You: New Study Exposes Gender Bias in Tech Job Listings

I found the study linked below fascinating, in part because I saw myself making exactly these mistakes. I have absolutely described jobs in those masculine terms instead of the more neutral terms. I didn’t realize that those were terms that would dissuade females from applying. When we teach classes on designing user interfaces, a key idea that we want students to learn is that “Thy User is Not You.” Don’t design for yourself. Don’t judge the interface only from your own eyes. You can’t imagine how the user is really going to use your interface. Try it with real users. Get input from real users. You can’t design interfaces for yourself and expect them to be usable for others. (Just like you can’t develop educational software for the developed world and expect it to work in the developing world.)

I heard the same lesson in this study. If you want to hire employees different than you, find out what you need to put in your job ad to attract them. You do not know how they will read your ad. Get input from others (who see things differently than you), and use expert guidance. Thy employee is not you. The paper — which details a series of five studies conducted by researchers at the University of Waterloo and Duke University — found that job listings for positions in engineering and other male-dominated professions used more masculine words, such as “leader,” “competitive” and “dominant.” Listings for jobs in female-dominated professions — such as office administration and human resources — did not include such words.

A listing that seeks someone who can “analyze markets to determine appropriate selling prices,” the paper says, may attract more men than a listing that seeks someone who can “understand markets to establish appropriate selling prices.” The difference may seem small, but according to the paper, it could be enough to tilt the balance. The paper found that the mere presence of “masculine words” in job listings made women less interested in applying — even if they thought they were qualified for the position.


Friday 22 March 2013

We Need an Economic Study on Lost Productivity from Poor Computing Education

How much does it cost the American economy that most American workers are not computer literate? How much would be saved if all students were taught computer science? These questions occurred to me when trying to explain why we need ubiquitous computing education. I am not an economist, so I do not know how to measure the costs of lost productivity. I imagine that the methods would be similar to those used in measuring the Productivity Paradox.

We do have evidence that there are costs associated with people not understanding computing:
  • We know from Scaffidi, Shaw, and Myers that there are a great many end-user programmers in the United States. Brian Dorn’s research on graphic designers identified examples of lost productivity because of self-taught programming knowledge. Brian’s participants did useless Google searches like “javascript <variablename>” because they didn’t know which variable or function names were meaningful and which were arbitrary. Brian saw one participant spend half an hour studying a Web resource on Java, before Brian pointed out that he was programming in JavaScript, which is a different language. I bet that many end-users flail like this — what’s the cost of that exploration time?
  • Erika Poole documented participants failing at simple tasks (like editing Wikipedia pages) because they didn’t understand basic computing ideas like IP addresses. Her participants gave up on tasks and rebooted their computer, because they were afraid that someone would record their IP address. How much time is lost because users take action out of ignorance of basic computing concepts?
We typically argue for “Computing for All” as part of a jobs argument. That’s what Code.org is arguing, when they talk about the huge gap between those who are majoring in computing and the vast number of jobs that need people who know computing. It’s part of the Computing in the Core argument, too. It’s a good argument, and a strong case, but it’s missing a bigger issue. Everyday people need computing knowledge, even if they are not professional software developers. What is the cost for not having that knowledge?

Now, I expect Mike Byrne (and other readers who push back in interesting ways on my “Computing for Everyone” shtick) to point out that people also need to know about probability and statistics (for example), and there may be a greater cost for not understanding those topics. I agree, but I am even harder pressed to imagine how to measure that. One uses knowledge of probability and statistics all the time (e.g., when deciding whether to bring your umbrella to work, and whether you can go another 10K miles on your current tires). How do you identify (a) all the times you need that knowledge and (b) all the times you make a bad prediction because you don’t have the right knowledge? There is also a question of whether having the knowledge would change your decision-making, or whether you would still be predictably irrational. Can I teach you probability and statistics in such a way that it can influence your everyday decision making? Will you transfer that knowledge? I’m pretty sure that once you know IP addresses and that Java is not the same as JavaScript, you won’t forget those definitions — you don’t need far-transfer for that to be useful. While it is a bit of a “drunk under the streetlight” argument, I can characterize the behaviors where computing knowledge would be useful and when there are costs for not having that knowledge, as in Brian and Erika’s work. I am trying to address problems that I have some idea of how to address.

Deepa Singh
Business Developer
Email Id:-deepa.singh@soarlogic.com

Worst practice in providing educational technology, especially to developing world

I followed an insightful chain of blog articles to this one. I started with Larry Cuban’s excellent piece, “No End to Magical Thinking When It Comes to High-Tech Schooling,” which cited the quote below, but first went through a really terrific analysis of the explanations that educational technology researchers sometimes make when hardware dumped in the developing world fails to have a measurable impact. I highly recommend the whole sequence for a deeper understanding of what real educational reform looks like and where technology can play a role.

1. Dump hardware in schools, hope for magic to happen. This is, in many cases, the classic example of worst practice in ICT use in education. Unfortunately, it shows no sign of disappearing soon, and is the precursor in many ways to the other worst practices on this list. “If we supply it, they will learn”: Maybe in some cases this is true, for a very small minority of exceptional students and teachers, but this simplistic approach is often at the root of failure of many educational technology initiatives.


Thursday 21 March 2013

Why the MOOCopalypse is Unlikely

The article from The Chronicle referenced below helped convince me that the MOOCopalypse is unlikely to happen. The MOOCopalypse is the closing of most American universities (“over half,” said one of our campus leaders recently) because of MOOCs. The Chronicle piece is about the professors currently offering MOOCs, and the survey (at left) is only of MOOC providers.

The first and greatest challenge to the MOOCopalypse is economic. It’s a huge cost to produce MOOCs — not just on the professors making the MOOCs, but on all their colleagues who have to cover the teaching and service that the MOOC-makers aren’t providing. For what benefit? Most of the MOOC professors talk about the huge impact, about a “one to two to three magnitudes” greater impact. It’s not clear to me how universities can take that to the bank. Unlike fame from a great result or influential paper, MOOC fame doesn’t obviously lead to greater funding opportunities.

There is currently no revenue from MOOCs. It is not reducing the number of students who need to be taught, nor the amount of service needed to run the place. It may be reducing the amount of research (and research funding) that the MOOC providers may have provided. MOOC professors who see that MOOCs may reduce the costs to students are consequently predicting fewer tuition dollars flowing into their institutions. Literally, I do not see that the benefits of MOOCs outweigh their costs.

In all, the extra work took a toll. Most respondents said teaching a MOOC distracted them from their normal on-campus duties. “I had almost no time for anything else,” said Geoffrey Hinton, a professor of computer science at the University of Toronto. “My graduate students suffered as a consequence,” he continued. “It’s equivalent to volunteering to supply a textbook for free and to provide one chapter of camera-ready copy every week without fail.” via The Professors Behind the MOOC Hype – Technology – The Chronicle of Higher Education.

The second reason why the MOOCopalypse is unlikely is that those predicting the closing of community colleges and state universities do not understand the ecology of these institutions and how they are woven into the fabric of their communities.
  • This year, I chair the computing and information system technologies (CIST) advisory board of local Chattahoochee Technical College. Most of the advisory board draws on local industry, the people who hire CTC’s graduates. They have a say in what gets taught, by describing what they need. How do you replicate that interchange with MOOCs?
  • I have had the opportunity to visit several institutions in the University System of Georgia through “Georgia Computes!” At Albany State University, they teach the standard computing courses, but the languages and tools they use are drawn from ones that the local industry needs. At Columbus State University, they teach content that nearby Fort Benning needs for its military personnel and employees. Courses are set up to meet the logistical needs of the military at Fort Benning. Why would the MOOC provider-professors at Stanford, MIT, Harvard, or Toronto want to meet any of those needs?
My third reason why I believe the MOOCopalypse is unlikely is based on a prediction about the technology. I do not believe that MOOCs are going to dramatically increase their completion rates (even with degree options and accreditation schemes like Accredible.com), and I do not believe that MOOCs will be successful in teaching the majority of students. Funders of higher education (e.g., parents and legislators) and consumers of higher education products (e.g., employers) are not going to accept the closing of state universities in favor of an option that fewer students graduate from and that produces weaker graduates. We are already hearing in The Chronicle the pushback against the plans to move community college courses into MOOCs. I can believe that some universities may close, but I cannot believe that we as a nation would willingly trade a not-great but underfunded educational system for a markedly worse one.

I’m reminded of the A Nation at Risk report and its claim: “If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war.” That report was about primary and secondary school education. The MOOCopalypse would be an act of war on higher education.


Living with MOOCs: Surviving the Long Open Learning Winter

One of the positive benefits of MOOCs is that a lot of faculty and administrators are exploring educational innovations with technology. When teachers explore how to facilitate learning, improved teaching and learning is likely to result. One of the problems is that many of these teachers and administrators are deciding that MOOCs and other open learning resources are the best bets for addressing educational problems. They are buying into the belief that open learning is the best there is (or, perhaps, the only thing that they found) and into the associated beliefs (e.g., that existing educational systems are ineffective and unsustainable, that “everyone already knows that a college degree means next to nothing”). Those of us who do educational technology research and don’t do MOOCs are likely in for a stretch where our work will be under-appreciated, or simply ignored. The AI community talks about their “AI Winter.” Let’s call this the Open Learning Winter.

Regular readers of this blog (and I’m grateful that you are here!) know that I’ve been doing a good bit of traveling the last few months. From MIT and Stanford, to Indiana and SIGCSE, I’ve had the opportunity to hear lots of people talk about the educational innovations that they are exploring, why they have decided on MOOCs and other open learning resources, and what they think about those of us who are not building MOOCs. Below are paraphrased snippets of some of these conversations (i.e., some parts of these quotes are literally cut-and-paste from email/notes, while other parts are me condensing the conversation into a single quote representing what I heard):
  • “You do eBooks? Don’t you know about Connexions? Why not just do Connexions books? Do you think that student interactivity with the ebook really matters?”
  • “You’re making ebooks instead of MOOCs? That’s really interesting. Are you building a delivery platform now? One that can scale to 100K students this Fall?” As if that’s the only thing that counts — when no one even considered that scale desirable a couple years ago.
  • “Ebooks will never work for learning. You can’t ask them to read. Students only want video.”
  • “Anchored Collaboration sounds interesting. Can I do it with Piazza? No? Then it’s not really useful to anyone, is it?”
  • “Why should we want to provide resources to state universities? Don’t you know that all of their programs are going to die?”
  • NSF Program officer at CCC MROE Workshop, “We better figure out online education. All the state universities are going to close soon.”
These attitudes are not going to change quickly. People are investing in MOOCs and other open learning resources. While I do not believe that the MOOC Apocalypse will happen, people who do believe in it are making investments based on that belief. The MOOC-believers (perhaps MOOC Apocalypse survivalists?) are going to want to see their investments pan out and will keep pursuing that agenda, in part due to the driving power of “sunk costs” (described in this well-done Freakonomics podcast). That’s normal and reasonable, but it means that it will be a long time before some faculty and administrators start asking, “Is there anything other than MOOCs out there?”

I think MOOCs are a fascinating technology with great potential. I do not invest my time developing MOOCs because I believe that the opportunity cost is too high. I have had three opportunities to build a MOOC, and each time, I have decided that the work I would be giving up is more valuable to me than the MOOC I would be producing. I do not see MOOCs addressing my interests in high school teachers learning CS, in end-users who are learning programming to use in their work, or in making CS more diverse. It may be that universities will be replaced by online learning, but I don’t think that it will all look like MOOCs. I’m working on some of those non-MOOC options.

Researchers like me, who do educational technology but don’t do MOOCs, need to get ready to hunker down. Research funding may become more scarce, since there are MOOCopalypse survivalists at NSF and other funding agencies. University administrators are going to be promoting and focusing attention on their pet MOOC projects, not on the non-believers who are doing something else. (Because we should realize that there won’t be anything else!) There will probably be fewer graduate students working in non-MOOC areas of educational technology. Most of the potential PhD students who contacted me during this last application cycle were clear about how important MOOCs were to them and the research that they wanted to do.

We need to learn to live with MOOCs, even if we don’t do MOOCs. Here are a couple of the hunkering-down strategies I’ve been developing:
  • While I don’t want to spend the time to build a MOOC, I am interested in being involved in analysis of MOOC data. It’s not clear how much data Coursera or Udacity will ever release (and why isn’t edX releasing data? They’re a non-profit!), but I see great value in understanding MOOCs. We might also learn lessons that can be applied in other areas of educational innovation with technology.
  • My colleagues involved in MOOCs at Georgia Tech have told me that we have the rights to re-use GT MOOC materials (e.g., all the video that has been collected). That might be a source of interesting materials for my research. For example, my colleague Jim Foley suggested that I might re-purpose video from a MOOC to create an eBook on the same content that might be usefully contrasted in a study.
I can’t predict just how long the Open Learning Winter might be. Given the height of the hype curve associated with MOOCs and the depth of the pockets of the early adopters, I suspect that it’s going to be quite a long, cold winter. Make sure that you have lots of jerky on hand — and hope that it’s just winter and not an Ice Age.


Tuesday 19 March 2013

True success of ‘robotics revolution’ hinges on training and education

I buy this argument, and it’s more subtle than the recent 60 Minutes piece. Does the influx of robotics lead to more or fewer jobs? 60 Minutes says fewer jobs. In contrast, Henrik Christensen says more jobs. The difference is education: there are fewer lower-education jobs, but more higher-education jobs. So unless we ramp up education, the net result is fewer jobs.

That’s not to say the transition to this brave new world of robotics will be painless. Short-term upheaval is inevitable. For Exhibit A, look at the jobless recovery we find ourselves in today: increased productivity has driven economic growth, yet unemployment rates remain stubbornly high. But most insiders seem to agree that if we look past the short term, the medium- and long-term benefits of the robotics revolution appear to be positive, not just in terms of economic growth but for job creation, too. They also warn that the job-creation part will require a keen focus on training and education for those low-skilled workers who get squeezed out of their jobs by robotics. Collectively, we ignore this warning at our own peril.


Saturday 16 March 2013

Percent of women graduates BS in CS: National, UW, GT


In the context of David Notkin’s receipt of the 2013 Computing Research Association A. Nico Habermann Award for outstanding contributions to supporting underrepresented groups in the computing research community, Lecia Barker of the National Center for Women & Information Technology (we hosted their Washington State Awards for Aspirations in Computing last weekend) sent us the chart to the right, comparing UW CSE’s performance to the national average in granting bachelor’s degrees to women. Via UW CSE News » Women in Computer Science: UW CSE is a pacesetter.

It was really great to see these results in the U. Washington CSE News, but it got me to wondering: Did all the big R1 institutions rise like this, or was this unusual at UW? I decided to generate the GT data, too. I went to the GT Self-Service Institutional Research page and downloaded the degrees granted by college and gender in each year from 2005 through 2011 (all separate spreadsheets). I added up Fall, Spring, and Summer graduates for each year, and computed the female percentage. Here are all three data sets graphed. GT hasn’t risen as dramatically as UW in the last two years (so UW really has done something remarkable!), but GT’s rise from far below the national average in 2005 to above the national average in 2009 is quite interesting.
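The roll-up arithmetic is simple enough to sketch in a few lines: sum the Fall, Spring, and Summer graduates within each year, then compute the female percentage. The numbers below are hypothetical placeholders, not the actual GT spreadsheet values:

```python
# Hypothetical per-term (female, male) BS CS graduate counts by year.
# The real analysis pulled these from separate institutional spreadsheets.
degrees = {
    2005: {"Fall": (4, 60), "Spring": (6, 80), "Summer": (2, 20)},
    2006: {"Fall": (5, 58), "Spring": (8, 75), "Summer": (3, 18)},
}

def female_percentage(terms):
    """Combine all terms within one year and return the percent female."""
    female = sum(f for f, m in terms.values())
    total = sum(f + m for f, m in terms.values())
    return 100.0 * female / total

for year in sorted(degrees):
    print(year, round(female_percentage(degrees[year]), 1))
```

The same loop, run over each institution's data, produces the three series plotted in the chart.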

Why is UW having such great results? Ed Lazowska claimed at SIGCSE 2013 that it’s because they have only a single course sequence (“one course does fit all,” he insisted) and because they have a large number of female TAs. I don’t believe that. I predict that more courses would attract more students (see the “alternative paths” recommendation from Margolis and Fisher), and that female TAs support retention, not recruitment. I suspect that UW’s better results have more to do with the fact that GT’s students declare their major on their application form, while UW students have to apply to enter the CSE program. Thus, (a) UW has the chance to attract students on-campus and (b) they have more applications than slots, so they can tune their acceptances to get the demographics that they value.

Percentage of females in BS CS graduates, by year, nationally, for U. Washington, and for Georgia Tech.


Georgia proposes reducing CS in high school curriculum

Georgia’s Department of Education is revising their curricula for computer science. You can see the existing pathway definition for “Computing” (here), and the definition of the existing first course, “Computing in the Modern World” (CiMW). CiMW is based on the CSTA Standards, and includes computing topics like data representation, Moore’s Law, algorithmic thinking, and problem solving. The proposed new first course, “Introduction to Digital Technology,” is linked here, as part of what is now called the “Information Technology” Pathway. It does include computational thinking, but removes most of the computer science pieces.

Why are they doing this? We are not sure; universities have not been involved in the revision, only high school teachers and industry folks. One theory is that the Department of Education wants to better align high school courses with jobs, so that high school students can graduate and go into the IT industry (perhaps the same goal as in NYC?).

I suspect that another reason for the change is the challenge of teaching teachers about CiMW topics. Teachers can’t teach everything in CiMW because (I suspect) many of those teaching the course don’t know the content yet. Some of the high school teachers involved in the redesign told us that they were asked to use fewer computing buzzwords, because the teachers don’t know all those terms. The teachers in this pathway are Business teachers, often with little STEM background. Professional development budgets in Georgia have been slashed since 2007, when the Computing Pathway was launched. It’s disappointing (if I’m right) that the decision is to reduce the scope of the curriculum instead of helping the teachers to learn. The new course is open for public comment (here). If you are interested, please consider leaving your comments on the changes in the questionnaire.


Computational Thinking in K–12: A report in Ed Researcher

Shuchi Grover and Roy Pea (Stanford) have a review of the field of computational thinking in K-12 schools in this month’s Educational Researcher. It’s a very nice paper. I’m excited that the paper is published where it is! Educational Researcher is the main publication venue for the largest education research organization in the United States (the American Educational Research Association). Roy has been doing work in computing education for a very long time (e.g., “On the prerequisites of learning computer programming,” 1983, Pea and Kurland). This is computational thinking hitting the education mainstream.

Jeannette Wing’s influential article on computational thinking 6 years ago argued for adding this new competency to every child’s analytical ability as a vital ingredient of science, technology, engineering, and mathematics (STEM) learning. What is computational thinking? Why did this article resonate with so many and serve as a rallying cry for educators, education researchers, and policy makers? How have they interpreted Wing’s definition, and what advances have been made since Wing’s article was published? This article frames the current state of discourse on computational thinking in K–12 education by examining recently published academic literature that uses Wing’s article as a springboard, identifies gaps in research, and articulates priorities for future inquiries.


Summer Camps in Georgia: Roll-up Report and Invitation to Play with Data (SECOND TRY)

I had posted this blog piece back in January, but then was asked to take it down. There were concerns that the data were not anonymized enough to guarantee participant anonymity. Tom McKlin did a great job of working with the Human Subjects Review board here at Georgia Tech, to figure out a set of data that would be useful to other computing education researchers, but would guarantee participant anonymity (to the extent feasible). Here’s our newly approved data set.

Our external evaluators (The Findings Group) have just produced the roll-up analysis for all the GaComputes-related summer camps from Summer 2012. These include camps offered at Georgia Tech, and those offered elsewhere in the state, started by GaComputes seed grants (as described in the 2011 SIGCSE paper that I blogged about). The results are strong:
  • Over 1,000 K-12 students participated statewide.
  • The camps were even more effective with women than men.
  • There was a statistically significant improvement in content knowledge for Scratch, Alice, and App Inventor, across genders, ethnic groups, and grade levels.
  • “The computing camps were particularly effective at increasing students’ intent to pursue additional computing, self‐efficacy in doing computing, and sense of belonging in computing.”
  • “Minority students reported significantly more growth in their intent to persist in computing than majority students.”
The Findings Group had a particularly interesting proposal for the Computing Education Research community. They are making all the survey data from all the camps freely available, in an anonymous form. They have a sense that there is more to learn from these data. It’s a lot of students, and there’s a lot to explore there in terms of motivation, engagement, and learning. If you play with these data, do let us know what you learn!
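The "statistically significant improvement in content knowledge" claim above is the kind of pre/post comparison a paired t-test supports, which is also a natural first analysis for anyone playing with the released survey data. A minimal sketch with hypothetical scores (not the actual camp data):

```python
import math
from statistics import mean, stdev

# Hypothetical pre/post content-knowledge scores for one camp cohort,
# one pair of entries per student. Not the real survey data.
pre  = [3, 4, 2, 5, 3, 4, 2, 3, 4, 3]
post = [5, 6, 4, 7, 4, 6, 3, 5, 6, 5]

def paired_t(before, after):
    """Paired t statistic on the per-student gains (after - before)."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

t = paired_t(pre, post)
print(f"t = {t:.2f} with {len(pre) - 1} degrees of freedom")
```

Comparing the statistic against the t distribution (or using a stats package like SciPy) gives the p-value; the published analysis may well have used a different test, so treat this only as a starting point.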



Friday 15 March 2013

IT Research Hearing Focuses on Security and Computing Education

A congressional committee heard about the importance of computing research, and the committee members responded with a need for more cyber-security and more computing education. Lazowska spoke about the NITRD program’s history and the role of computing in the US economy. He showed an NRC chart on research and IT sectors with billion-dollar markets. When questioned about cybersecurity by Congressman Steven Stockman (R-TX), Lazowska also talked about the need to integrate security into the building of systems, rather than adding it on at the end as a defensive measure.

Stockman, who credits support from the fiscally conservative Tea Party for his election, had the quote of the hearing: after pressing Lazowska for an order-of-magnitude estimate of how much additional investment in fundamental cyber-security research would move the needle, he seemed surprised that the number PITAC requested back in 2005 was “only” $90 million. “Well, I’m interested in getting you billions, not millions,” he said, indicating he was very concerned about U.S. vulnerability to cyber attack. The Subcommittee members were very interested in how to tackle the education problem in computing, as well as how they could help researchers address cybersecurity moving forward.


CSAB and ABET/CAC Criteria Committee Survey

At the ACM Education Council meeting this last weekend, I heard about changes in the accreditation criteria being considered for computing disciplines (e.g., Computer Science, Information Systems, Information Technology). The committee has asked for feedback on several issues that they’re considering, e.g., how much mathematics do students really need in computing? That question, in particular, is one that I’m reading about in The Computer Boys Take Over by Nathan Ensmenger. Ensmenger tells the story of how mathematics got associated with the preparation of programmers (not computer scientists). Mathematics showed up on the early aptitude tests that industry created as a way of figuring out who might be a good programmer. But Ensmenger points out that mathematical ability correlated only with performance in academic courses; it did not correlate with performance as a programmer. It’s not really clear how much math is actually useful (let alone necessary) for being a programmer. Mathematics got associated with programming decades ago, and it remains there today.

The Committee is inviting feedback on this and other issues that they’re considering. This survey was developed by a joint committee from CSAB and the ABET Computing Accreditation Commission, and is designed to obtain feedback on potential changes to the ABET Computing Accreditation Criteria. We are looking for opinions about some of the existing ideas under discussion for change, as well as other input regarding opportunities to improve the existing criteria. Respondents to the survey may be computing education stakeholders in any computing subdiscipline, including computer science, information systems, information technology, and many others. Stakeholders may include professionals in the discipline, educators, and/or employers of graduates from computing degree programs.


First Workshop on AI-Supported Education for Computer Science

Shared by Leigh Ann Sudol-DeLyser (Visiting Scholar, New York University) with the SIGCSE list. Dear SIGCSE-ers! I would like to announce the First Workshop on AI-Supported Education for Computer Science, to be held at the Artificial Intelligence in Education conference this summer in Memphis, and invite the submission of papers from the SIGCSE community. Please see the website at: https://sites.google.com/site/aiedcs2013/. Submissions are due by April 12, 2013.

Workshop Description:

Designing and deploying AI techniques within computer science learning environments presents numerous important challenges. First, computer science focuses largely on problem solving skills in a domain with an infinitely large problem space. Modeling the possible problem solving strategies of experts and novices requires techniques that represent a large and complex solution space and address many types of unique but correct solutions to problems. Additionally, with current approaches to intelligent learning environments for computer science, problems that are provided by AI-supported educational tools are often difficult to generalize to new contexts. The need is great for advances that address these challenging research problems. Finally, there is growing need to support affective and motivational aspects of computer science learning, to address widespread attrition of students from the discipline. Addressing these problems as a research community, AIED researchers are poised to make great strides in building intelligent, highly effective AI-supported learning environments and educational tools for computer science and information technology.

Topics of Interest:
  • Student modeling for computer science learning
  • Adaptation and personalization within computer science learning environments
  • AI-supported tools that support teachers or instructors of computer science
  • Intelligent support for pair programming or collaborative computer science problem solving
  • Automatic question generation or programming problem generation techniques
  • Affective and motivational concerns related to computer science learning
  • Automatic computational artifact analysis or goal/plan recognition to support adaptive feedback or automated assessment
  • Discourse and dialogue research related to classroom, online, collaborative, or one-on-one learning of computer science
  • Online or distributed learning environments for computer science

Online Learning Outcomes Equivalent to Traditional Methods: But what about the drops?

This is a great result, if I can believe it. They took 605 students, some in a traditional course and some in a “hybrid” course, and did pre/post tests. They found no difference in outcomes. Here’s what I’m not sure about: What happened to the students who failed or withdrew? Other studies have suggested that online courses have higher withdraw/failure rates. Is that the case here? There is only one footnote (page 18) that mentions withdrawal/failure: “(27) Note that the pass rate in Figure 1 and Appendix Table A3 cannot be used to calculate the percentage of students who failed the course because the non-passing group includes students who never enrolled or withdrew from the course without receiving a grade.” But that’s it. If you lose more students in one format, and the students you lose are the weaker students (not an unreasonable assumption), then having the same learning gains doesn’t mean the formats are equivalent for all students. It means that you’ve biased your sample.

The researchers asked the students to complete a number of tests and questionnaires before beginning the course and again after completing it, and they analyzed and compared the results between the two groups of students. The results revealed no statistical difference in educational outcomes between the two groups of students. In fact, the students in the hybrid course performed slightly better, but not enough to be statistically significant.
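A toy simulation makes the sampling-bias concern concrete: even when true learning is identical in both formats, differential withdrawal of weaker students inflates the completers' average in one arm. Everything here (the score distribution, the 60-point "weak student" threshold, the 50% withdrawal rate) is an assumption for illustration, not a claim about the study:

```python
import random
from statistics import mean

random.seed(0)  # reproducible illustration

# True post-test scores, identical across both formats.
students = [random.gauss(70, 10) for _ in range(10_000)]

# Traditional format: everyone completes and is post-tested.
traditional = students

# Online/hybrid format: students scoring below 60 withdraw
# half the time before the post-test, so they never get measured.
online = [s for s in students if not (s < 60 and random.random() < 0.5)]

print(f"traditional completers' mean: {mean(traditional):.1f}")
print(f"online completers' mean:      {mean(online):.1f}")
```

The online completers' mean comes out higher even though no student learned more, which is exactly why "no difference in outcomes among completers" can mask a real difference once attrition is accounted for.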
