Saturday, February 7, 2009

The River Tam Blues

As I mentioned in my last post, I've been having some... completion issues. As I sat and thought about this last night, I realized that my completion issues resulted from a lack of my normal zeal for science. I would get partway through the post and realize that I was simply too demoralized to finish. So why am I demoralized?

The main part is that science hasn't been a lot of fun recently. I've had a variety of papers going through the review process over the past year, and the crazy blowback I've been getting has really started to take a toll. Don't get me wrong, I'm a big girl. I know how the review process works, but it has definitely seemed stranger and angrier lately. Every paper is a multi-round, trench warfare-type slugfest, often lasting double (if not triple) the typical time from submission to acceptance for that journal. (I won't go into all the details because one of my half-finished posts is about the bizzaritude I've been experiencing.) The Professor Chaos part of my personality knows that the weird blowback is actually a good sign for my research direction, but the River Tam part is feeling a little battered and bruised.

You might be wondering why my bitter reviewer battles give me the impression that I'm on a good research direction. Let me digress for a moment. A while back, Drugmonkey made an open meme on creating your own index/law. When I was a graduate student, a big name professor told me a story about one of his papers that is now a citation classic. He said that the responses to his work fell into one of two categories: it was wrong, or it was trivial. Over the years, I have listened to other big names say similar things, and based on these data, I have created the novelty test - a modified version of what the big name told me when I was a young whelp. I present: The Professor Chaos Novelty Litmus Test. It works as follows: if the response to your work contains all of the following three elements (either separately from different individuals or all together from one individual), there's a high probability that your work ranks high in novelty:

1) "This work is wrong". This can take the form of nitpicky details that actually don't affect your results - which you may already have demonstrated in said paper - that are blown up as if you committed scientific fraud. This is the low - but still indicative -end of the "wrong" scale. Or it can take more bizarre turns resulting in weird statements that make you feel like you just had a stroke, like: "The author's logic about the nth dimensional constraint on the ecology of dogs rests on a critical assumption that dogs behave like dogs, but trees don't behave like dogs. This obviously invalidates this entire study on dog ecology." In my opinion, the more bizarre the reason, the more your work has messed with a reviewer's preconceived notions and therefore it is more likely you are truly on to something. Congratulations.

2) "This is not novel because it has been done before by a really big name". The key for this one to count as a positive novelty result is that the paper must not actually exist. There are two forms of this response. The first is an explicit reference to the "original" paper (i.e. you could easily locate the paper that is being referred to), but upon examination, the paper may be on dogs, but that's the end to the overlap. This ranks low on the "not original scale",  because its always possible the reviewer honestly didn't understand your paper.  On the high end of this scale is the "you have displayed your complete lack of competence in this area because you have missed the fundamental paper that has already published your idea which was published by either big name X, big name Y, big name Z, big name A. But I don't have the time to tell you which one of them did this, or which of their over 100 papers is the one you should care about. I won't even give you hints on journal or year. But boy it makes you look bad that you didn't know about this paper." I used to think this was just sloppy reviewing, but in my cynical older age, I am beginning to suspect that these are actually either conscious or subconscious attempts to throw the novelty of the paper in to doubt when the reviewer actually has no evidence of such...especially since I have on more than one occasion now gone through the multitude of papers by the list of big names and never found that my work had been done previously.

3) "This is just trivial". In contrast to #2 which may not debate the importance of the idea only that its already been done, this is the 2+2 argument. Your idea is so fundamentally trivially true that its not even worth publishing. Variants of this include: Everyone already knows this, its common knowledge, even if there is no record of the idea ever being tested - or maybe even proposed - in the literature. What's important about this critique is that you can go to the top journals in your field and pull out numerous, highly regarded papers published in the last 2 years that clearly ignore this "trivially true" statement about how the world works and in fact are actually operating on an assumption that the world does not work that way at all. You cannot, however, find any papers that actually are using the "trivially true" idea that everyone already knows.

So that's it, the Professor Chaos Litmus Test for Novelty. This is really only an indicator and not a measure of novelty - but the more absurd the responses, the higher the likelihood that you're on to something good. Oh, and you get extra bonus points for receiving a review or other response that actually makes all 3 of the above points.

10 comments:

Unknown said...

Great post -- I expect several if not all of these responses when my first paper goes out.

There will definitely be the "this is trivial" critique. There are several ideas that have been accepted as TRUTH in my field which have never actually been demonstrated in the literature. The ideas have been stated and restated so many times that everyone assumes that they're true, but there is NO evidence to back them up.

"BigWig did this years ago" -- several BigWigs have made a half-assed sideways attempt to sort of approach providing some evidence for these ideas. They're usually hidden in larger studies as "interesting observations" that are never pursued to figure out what's actually going on.

"This is Wrong" -- this gets back to the "this is trivial" critique. There are several competing TRUTHS that are floating around without any experimental backup, so reviewers will probably just pick their favorite pet TRUTH and hack away at that.

I've been dreading the review process on this one -- thanks for your positive take!

madscientist said...

I have had many, many wacky reviews that I just don't understand. The worst thing that I ever had to deal with was a reviewer who, I am quite sure, lost his marbles.

He was reviewing a paper by my graduate student. The paper presented modeling results from a relatively new model that we had just written. In the first round, he contacted the editor for a copy of the modeling paper (which was published and he could have gotten online). The editor contacted us. We downloaded the version from the web and sent it to the editor. The review came back pretty much slamming the modeling paper. It criticized a common assumption that is used by 99.9% of the people in the field (I only know of 2 people who question the assumption - one of whom was reviewing my student's paper). The paper was rejected.

We resubmitted with an explanation that this is a common assumption that everyone in the field makes and that every other model like ours uses the exact same assumption.

The reviewer came back with extremely harsh language about how, just because everyone does it, doesn't mean that it is right. Here is a direct quote:

If they thought the model was based on a “not bad assumption”, they should have stated it in the paper and not only in the Reply. If they truly believed that their model was self-consistent, they should have defended it in the Reply. The inconsistency between the two is unethical for any scientist. I would like to call for disciplinary action from JGR on such behavior. Does JGR have a mechanism to deal with this type of problem?

This goes on for 3 more rounds, in which I cite BOOKS where the assumption is discussed and used, cite data sets that rely on the assumption being correct, and finally show profiles in the atmosphere that prove it is a valid assumption.

Eventually the editor gets a 3rd referee and the paper is accepted with no arguments.

I pretty much know who the guy is who reviewed the paper. In order to prove how wrong I (and the whole community) was, he wrote a large code and presented the results at a conference. It turns out that we were all right. As he finished his presentation, which was a lot of setup of the argument and about 1 minute of conclusion, I got to ask him, "So, what you are basically saying is that this is pretty much equal to that, and that an assumption of equality is fine." "Uh..... yes." "Thank you."

F*cker.

Ok, my blood pressure is now going back down....

yolio said...

Great post. I had this experience with the first paper I ever wrote. I was prepared for criticism, but instead what I got was this mystifying set of reviews that were all some variant on "it is novel, but lacks a certain je ne sais quoi". I nearly dropped out of science, I was so demoralized by the process. Fortunately, I am very stubborn.

It is really good to hear about others having the same experience.

Phagenista said...

I had a manuscript that was repeatedly called trivial and told that it had been done before (it hadn't). It was accepted on the 5th journal submission and subsequently called a must-read by Faculty of 1000. I'm buying into the Professor Chaos Novelty Litmus Test!

Drugmonkey said...

you just diagnosed my program as total genius, thanks holmes!!!

Anonymous said...

River Tam, this is a totally brilliant motherfucking post!!!! Way to go, dude!

Anonymous said...

I had a review where one 'lost his marbles' asshole went off on me with NO DATA WHATSOEVER contrary to my results, NO CITATIONS, NO NOTHING... I swear it was like reading a page from the Bible with a storyline of why my data is sooooooo wrongz. Noah did it THIS way and not the way YOU show (with data, pretty figures, rejected H0, etc.). Those kinds of reviews are easy to attack because the person is on crack!

I really hate it when the fuckers talk in the "truthiness" crap lingo... I want to kick something.

And the "big wig did it already" is another sure-bet for my papers. WHERE? Proc of the No Name Shit Meeting from Fucking Way Back before Papyrus.

Maybe we need "rejected paper bingo"?

Prof-like Substance said...

Oh, I have one more to add: when one reviewer fixates on one part of the paper that is not even the main story. This usually results in demands for more data and further study of sub-sub-section X. I have had a reviewer grab onto a section of a paper like a pitbull and refuse to let the paper go forward until their comments were dealt with to their satisfaction. It took the editor of the journal giving us the green light to get the paper out of the death-grip, without our doing two more months' worth of work to satisfy the reviewer that side-point X was addressed the way they wanted.
Great post, looking forward to the rest!

Becca said...

bless you for this post.
I just had a relatively heated (although good-natured) discussion with my advisor about novelty. I think maybe I tend to underestimate my own creativity.

And 'how much novelty is needed for x paper in z type of journal' is definitely rather mysterious at the early grad-student stage. I'm way too far along in my grad studies to be this perplexed by it.

Professor Chaos said...

I have to admit that the response to this post caught me by surprise, inspired an interesting discussion with General Disarray, and (perhaps) a follow up post.

Thanks to everyone for sharing your stories and support. It helped remind me why I love blogging!