Reviewing
I've spent an enormous amount of time in the past two weeks reviewing research papers for a variety of conferences and scholarly journals.
Reviewing or refereeing research is an important part of the process of scholarship; it keeps specious results out of the literature, helps detect mistakes, and works to keep conferences interesting and worthwhile.
Unfortunately, a surprising portion of the work submitted to journals and conferences is a hopeless mixture of dullness and incompetence. You expect to see papers that are not very good: this vale of tears, let's face it, has plenty of things that just aren't very good. Not all the children can be above average.
But we're not talking about mediocrity here; we're talking about absurd, pointless papers. Papers that propose systems no one will build, that construct and evaluate systems no one will use, that draw elaborate statistical conclusions from a few hastily scribbled surveys filled out by graduate students. Papers in arcane pidgins that — even if you cleaned up the grammar — would remain poorly written because the sentences are so fuzzy and their arrangement so dully pedantic.
There's plenty of good research out there, too; unless you're a reviewer or an editor, you never see the bad papers. That's what reviewers do: they make sure you don't need to wade through this stuff, while still making it possible for a really good study by an unknown scholar, working at some small institution or tiny company in a seldom-visited and distant place, to have the same chance of publication as the work of the lions of the Ivy League.
But it's not a lot of fun. And it would be a lot easier on everyone if people would simply not submit papers they know are embarrassingly bad.