Here is an excellent interview with Amitabh Chandra about his experiences as editor of the Review of Economics and Statistics.
To the UConn third year paper writers and students going on the market this year, this message is especially for you:
What surprised you the most about being an editor of a major general interest economics journal?
I never thought that the single best predictor of getting a paper accepted would be clear and accessible writing, including an explanation of where the paper breaks down, instead of putting the onus of this discovery on the reader.
It’s my sense that a paper where the reviewer has to figure out what the author did will not get accepted. Reviewers are happy to suggest improvements, provided they understand what is happening, and that makes them appreciate clear writing and explanation. They become grumpy and unreasonable when they believe that the author is making them work extra hard to understand a paper, and most aren’t willing to help such an author. They may not say all this in their review, but they do share these frustrations in the letter to the editor. This is one reason that I encouraged a move towards 60–70% desk-rejections at RESTAT: if an editor can spot obvious problems with clarity or identification within 15 minutes, then why send it out for review?
Of course, all of this results in the unfortunate view that “this accepted paper is so simple, but my substantially more complicated paper is much better,” when the reality is that simplicity and clarity are heavily rewarded. We don’t teach good writing in economics, and routinely confuse LaTeX equations with good writing, but as my little rant highlights, we actually value better writing. So this is something to work on.
And a related point:
Is the revise and resubmit process working well for you? If so, what is making it work so well? If not, how could it be improved?
At the Review of Economics and Statistics, we moved to more of a “conditional contract” approach with R&R decisions. In other words, if we gave you an R&R decision, we were basically saying, “do these things and we’ll take the paper.” This preserves everyone’s time and speeds up the review process, but it does come at a cost: we give up the option to publish papers that may improve as a result of the first-round comments, but where we (the editors) thought the author’s setting or data did not permit this improvement. This is where subjectivity creeps in: an author who wrote a confusing paper may not be viewed as being up to the task of simplifying it. Was the initial submission confusing because the author was never taught how to write well, or is this just a muddled approach? Here’s where an editor’s knowledge of an author can come in. But this is also highly subjective and privileges networks.
I think this last bit is so important. Whether the editor believes you are up to the task of successfully revising your paper is subjective. This implies that he/she will use imperfect signals of your ability when making decisions. Of course, clarity of writing is one signal, but I would also add that if your tables are messy, you have typos throughout, you didn't carefully explain your data selection criteria, etc., then all of these may also be read as signals of general sloppiness in your analysis. Another potential issue: if the editor has seen you present papers at conferences or make excellent comments there, this may also (at least subconsciously) be used as a signal of the likelihood that you can complete a tough request for revision.
Thank you, David Slusky, for your great journalism!