I’ve been absolutely flattened by planning for the Web Science conference this week. We have a ton of papers – more than 130 – which means nearly 400 reviews from about 80 prominent scholars, journalists, and writers.

I ought to have foreseen one difficulty that I did not: in a wildly interdisciplinary conference, reviewers will disagree wildly. Every conference seems to have some paper with reviews like this:

  1. Strong Accept (Expert)
  2. Weak Accept (?)
  3. Strong Reject (Expert)

but for Web Science this year, we’ve got these coming out of our ears.

People sometimes want to take an average, or to vote, but that’s a poor solution. Back in the old days, we’d get those two experts at the program committee table, hear them argue, and decide who was convincing. Now, travel is too expensive and we try to make do with EasyChair.
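A quick sketch of why the average misleads here. The numeric mapping below (+3 for Strong Accept down to −3 for Strong Reject) is hypothetical, not EasyChair's actual scale; the point is that the mean of sharply split reviews looks like a lukewarm consensus:

```python
# Why averaging hides reviewer disagreement.
# The score mapping is a hypothetical illustration, not EasyChair's scale.
from statistics import mean, stdev

scores = {"Strong Accept": 3, "Weak Accept": 1, "Strong Reject": -3}

reviews = ["Strong Accept", "Weak Accept", "Strong Reject"]
values = [scores[r] for r in reviews]

print(mean(values))   # about 0.33 -- reads like a borderline paper
print(stdev(values))  # about 3.06 -- but the reviewers deeply disagree
```

The mean alone would file this paper with the genuinely middling ones; the spread is what tells you two experts are in open conflict, which is exactly the case a committee discussion used to resolve.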

I’m organizing the program in Tinderbox Six. This has its advantages, both for debugging and testing and for confirming that performance is adequate. And today was a first: Tinderbox Six stayed up through an entire day of challenging work.

This is not to say there are no problems. For example, I’ve got a macro ^do(tbx) that lets me write “Tinderbox” quickly with a proper link. Tinderbox Six autocorrects “tbx” to “tax.” Thanks a lot!