Wednesday, February 17, 2010

Is Talent Overrated?

I just read a compelling article by Tim van Gelder, Melanie Bissett, and Geoff Cumming, entitled Cultivating Expertise in Informal Reasoning.

It argues that expertise in informal reasoning (argument analysis) can be developed through deliberate practice. Stated this way, it may sound obvious. However, it seems to me that there's a general conception that a person's ability to analyze arguments is purely a function of some sort of innate intelligence.

The article casts doubt on that position by describing the results of a study in which undergraduates' performance on a standardized critical thinking test improved after a semester of practice in argument mapping.

As the article itself notes, this does not conclusively prove the "deliberate practice hypothesis" in the context of informal reasoning. To do that, a larger study would have to be carried out that monitored people over a period of years as they engaged in disciplined practice with a mentor (akin to the way musicians and athletes train).

Even in the absence of this stronger evidence, I find the thesis to be extremely plausible and consistent with my personal experience.

I hope that you find the article inspiring.

Friday, January 1, 2010

Latest tweaks

I made a few new tweaks to the site today. First, I made it so that anyone can submit comments, regardless of whether or not they're logged in. We'll see how that works out...

I also made it much simpler for people to submit bad arguments that they've come across. Instead of having to prepare a complete multiple choice question, answer, and explanation, now they just need to submit the argument itself.

Users who are logged in can view their history - a list of all of the questions that they've answered and how they did on each. I added a column to this table called the Difficulty Index, which reflects how challenging other users found each question to be.

Finally, I eliminated an obsolete redirect from the multiple choice pages, which I hope will speed things up dramatically for mobile users.

Wednesday, December 30, 2009

Comments and user histories

By popular request (actually at the request of a user named Trout), I added a commentary feature that lets users share their thoughts about each argument and multiple choice question.

I believe that there are objective standards by which arguments can be analyzed and judged, and I try to create questions that explore these principles in a crisp and fair manner. However, there's obviously a certain level of subjectivity in the way that any given argument can be interpreted and characterized. By opening up each question for discussion, we can consider these ideas in a more collaborative forum.

I also added a "user history" page for registered users. This lists every question that they've answered, indicates whether they got it correct, and allows them to go back to it and participate in the ongoing discussion around it.

So let the conversations unfold.

Monday, December 28, 2009

Site Updates

I haven't made any structural changes to the site in a few months. Today I was feeling ambitious.

Most noticeably, I tweaked the CSS style sheet to give the site a more elegant appearance.

I also rigged up the quiz so that the original question now appears on the page that gives you your results. This will make it easier to digest answers and learn from mistakes.

Behind the scenes, I changed the way that questions can be authored. Previously when writing a question, I would select the correct answer from a prespecified list. The distractor (incorrect) answers would then be randomly chosen from the same list. Now, I (or anyone who wants to, really) can write free form answers. This requires more creativity, but I think it will open the door to a wider variety of question types. So keep an eye out for those.

I also fixed a pesky login issue that a kind user brought to my attention.

More to come in 2010...

Wednesday, May 6, 2009

What is this "score" thing?

The user score at Bad Arguments is based on two things: the number of questions you answer correctly and the difficulty of those questions.

Here's (roughly) how it's calculated: Each question has a number reflecting how often it's answered correctly, which we'll call the success percentage. For each question you get right, subtract its success percentage from 100; the result is your points for that question. (So if you correctly answer a question that only 40% of users get right, you'll earn 60 points.) Add up your points across all of your questions, and multiply the total by ten. That's your score.
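For anyone who thinks better in code, here's a minimal sketch of that calculation. The function name and the idea of passing in a list of success percentages are my own; the site's actual implementation surely looks different.

```python
def score(correct_success_pcts):
    """Compute a user's score from the success percentages (0-100)
    of the questions they answered correctly.

    Each correct answer is worth (100 - success percentage) points;
    the point total is then multiplied by ten.
    """
    points = sum(100 - pct for pct in correct_success_pcts)
    return points * 10

# Example: correct answers on questions that 40%, 75%, and 90% of
# users get right are worth 60, 25, and 10 points respectively,
# for a score of (60 + 25 + 10) * 10 = 950.
```

Note that, as described below, a question's success percentage shifts as more people attempt it, so rerunning this calculation later can give a different score for the same answers.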

If it seems arbitrary, that's because it is. But it satisfies my three requirements: people who answer harder questions correctly get a higher score; people who answer more questions get a higher score; and there's a single metric by which everyone can be compared. (The percentage of questions that you get right doesn't work for this, because questions are tailored to your skill level.)

The whole thing gets multiplied by ten for no other reason than that 5120, for example, seems like more of an achievement than 512.

One other thing to note here: Because your score depends upon how well other people do answering the same questions that you did, your score can go up or down even if you don't do anything.

Questions tailored to your skill level

To be perfectly frank, one of my goals is to get users of Bad Arguments addicted to the site. To accomplish this, the site needs to appeal to users with a wide range of experience in evaluating arguments. It should have easy questions for beginners, challenging ones for advanced users, and everything in between. If the questions are too easy or too hard, users will be more likely to leave the site and go back to Facebook (or wherever else they're procrastinating).

My speculation is that if a user is getting more than 30% of the questions wrong, they'll get frustrated and leave. And if they're getting over 90-95% of the questions right, they'll start to get bored. So a few days ago I added some code that tries to keep you on that middle path. If your success percentage drifts outside the (roughly) 70-90% range, you'll get a question designed either to challenge you or to boost your ego, depending on what (I think) you need.
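The gist of that logic can be sketched in a few lines. The thresholds and names here are assumptions pieced together from this post, not the site's real code:

```python
def next_question_mode(answered, correct):
    """Decide what kind of question to serve next, based on the
    user's running success rate.

    Below a ~70% success rate, serve something easier (an ego
    boost); above ~90%, serve something harder (a challenge);
    otherwise stay the course.
    """
    if answered == 0:
        return "normal"  # no history yet; nothing to adapt to
    rate = correct / answered
    if rate < 0.70:
        return "easier"
    if rate > 0.90:
        return "harder"
    return "normal"
```

In practice "easier" and "harder" would map to questions with higher and lower success percentages, respectively.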

And what determines how challenging a question is? Simply how often users get it right: the lower a question's success percentage, the harder it's considered to be.

Thursday, April 30, 2009