Paper of the Day: “The camel has two humps” and “Camels and humps: a retraction”
by Saeed Dehnadi and Richard Bornat

2017-12-04

This week is Computer Science Education Week! It’s an initiative I watch each year with a mix of admiration and dread; I’m dead convinced that our society needs to expose more students to computer science, not fewer; that programming is getting more accessible and relevant, not less; that the only way to build diversity of experience and perspective downstream in industry and academia is to construct education programs that appeal to diverse populations upstream.

That said, I’m not sure that Barack Obama or a movie star spending an hour writing a JavaScript “hello world” program and getting an “I learned code” sticker does much to move that needle. There’s a fundamental tension: we want computer science to be accessible, but it is genuinely hard: hard like algebra is hard. Everyone takes algebra, and as a society we see the value in teaching every student the power of abstracting from concrete arithmetic to symbolic manipulation. But we don’t, generally, construct flashy multimedia initiatives to get kids to write “y = x + 2” on a sticky note and pretend that’s all there is to it.

Some programmers and educators shrug at the whole idea and submit that perhaps some students just can’t learn to program. Today’s POTD is perhaps the most famous study in this vein. The premise was this: the researchers claimed to have found a simple pre-test that reliably predicted whether a student would succeed in their introductory programming class.

The test consisted of questions that looked like this:

// After this short program snippet, what are the new values of `a` and `b`?
int a = 10;
int b = 20;

a = b;

/*
 *  1) a = 30 and b =  0
 *  2) a = 30 and b = 20
 *  3) a = 20 and b =  0
 *  4) a = 20 and b = 20
 *  5) a = 10 and b = 10
 *  6) a = 10 and b = 20
 *  7) a = 20 and b = 10
 */

The researchers found that students with a consistent mental model (it didn’t have to match what Java actually does) were more likely to succeed in the course, though the histograms are neither as encouraging for the “consistent” group nor as discouraging for the “inconsistent” group as the paper implies.
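For reference, here’s what Java itself does with that snippet: assignment copies the value on the right into the variable on the left and leaves the right-hand side alone, so the “Java-correct” answer above is option 4. A minimal runnable version (the class name and the print statement are mine, not the paper’s) looks like this:

public class AssignmentQuiz {
    public static void main(String[] args) {
        int a = 10;
        int b = 20;

        a = b;  // copies the value of b into a; b keeps its value

        // Prints "a = 20, b = 20" (option 4 in the list above).
        // A student who consistently answered option 3 (a = 20, b = 0) is using a
        // "the value moves" model; one who answered option 7 (a = 20, b = 10) is
        // using a "the values swap" model. Both are wrong about Java, but consistent.
        System.out.println("a = " + a + ", b = " + b);
    }
}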

The paper’s interesting (and dripping with snark) and worth reading, but: it was retracted by one of its authors, Richard Bornat, in 2014! The retraction is our bonus POTD for today, and worth reading on its own. Bornat writes:

“Just to be clear: I do not believe that Dehnadi discovered an aptitude test for programming, as I claimed in 2006. Nor do I believe in programming sheep and non-programming goats.”

Here’s my opinion, in short: it’s not that easy. We don’t get to just write off half the population (of incoming computer science students, no less!) as “incapable of learning to program”.

There’s no “programming gene”; I believe that if you can learn to solve an equation, or change a tire, you can learn the fundamentals of programming. Some people may learn the basics of “programming as a craft” and choose to stop there, able to solve small real-world problems. Others will be drawn further into the theory of computer science and choose to delve deeper. Still others won’t find any interest in the topic at all and will choose not to pursue it.

So, all that said, why’s it still so darn hard to get students to choose to learn at least the basics? Is the topic just that uninteresting? My theory: yes, it is. At least the way we’ve chosen to teach it.

More on that when I write about tomorrow’s Paper of the Day!

