Saturday, November 30, 2019

Paul Niehaus on Doing Research

Here it is: Paul Niehaus reflects on how to do research. Regular readers of my blog know that when I find good advice online, I not only share it but also highlight my favorite parts. That was really hard for this particular article, because I think everything he writes is important! My big advice is to read the article often. Bookmark it. Read it at least once a year, more often when you're just starting out.

How to come up with a research topic?  "At the end of the day, I think most good research ideas involve finding connections between these things — (1) identifying a research opportunity to (2) demonstrate something about reality that advances the (3) conversation in your field."  

The quote is from the article, but I added the numbers and will add a few thoughts of my own. The key insight is that to come up with something new, you not only have to know the latest research in your field, but you also have to have some knowledge of the real world (either by reading the news or by paying attention to things going on in your life), and you have to know about the empirical tricks, data sets, etc. available to work with. Yes, this means you're often just collecting this information and not using it. Yes, I know you're too busy writing your job market paper to think about your next paper. Yes, I know you have at least three different coauthors waiting for you to do that thing you promised to do. How can you think about your next project? Answer: It seems to me that a good researcher is always collecting and storing tricks so that they're prepared--great ideas don't necessarily come exactly when you have time to implement them.

But Paul doesn't stop at how to come up with research topics. He also tells us how to get papers actually written. I really love the advice to fail fast. I struggle with this. It's so hard to give up on paper ideas. Maybe this is why it's especially important for people like me to have lots of ideas.  

But my favorite piece of advice: Sleep well. 


Now, bookmark the article. Write a note in your calendar to read it again next year. And again the year after that. 

Friday, November 15, 2019

How to Publish in ReStat

Here is an excellent interview with Amitabh Chandra about his experiences as editor of the Review of Economics and Statistics.

To the UConn third-year paper writers and students going on the market this year, this message is especially for you:

What surprised you the most about being an editor of a major general interest economics journal?
I never thought that the single best predictor of getting a paper accepted would be clear and accessible writing, including an explanation of where the paper breaks down, instead of putting the onus of this discovery on the reader.

It’s my sense that a paper where the reviewer has to figure out what the author did will not get accepted. Reviewers are happy to suggest improvements, provided they understand what is happening, and that makes them appreciate clear writing and explaining. They become grumpy and unreasonable when they believe that the author is making them work extra to understand a paper, and most aren’t willing to help such an author. They may not say all this in their review, but they do share these frustrations in the letter to the editor. This is one reason that I encouraged a move towards 60-70% desk-rejections at RESTAT—if an editor can spot obvious problems with clarity or identification within 15 minutes, then why send it out for review?

Of course, all of this results in the unfortunate view that “this accepted paper is so simple, but my substantially more complicated paper is much better,” when the reality is that simplicity and clarity are heavily rewarded. We don’t teach good writing in economics—and routinely confuse LaTeX equations with good writing—but as my little rant highlights, we actually value better-writing. So this is something to work on.

And a related point: 

Is the revise and resubmit process working well for you? If so, what is making it work so well? If not, how could it be improved?

At the Review of Economics and Statistics, we moved to more of a “conditional contract” approach with R&R decisions. In other words, if we gave you an R&R decision, we were basically saying, “do these things and we’ll take the paper.” This preserves everyone’s time, and speeds up the review process but it does come at a cost: we give up the option to publish papers that may improve as a result of the first-round comments, but where we (editors) thought that author’s setting or data did not permit this improvement. This is where subjectivity creeps in: an author who wrote a confusing paper may not be viewed as being up to the task of simplifying it. Was the initial submission confusing because of not being taught how to write well, or is this just a muddled approach? Here’s where an editor’s knowledge of an author can come in. But this is also highly subjective and privileges networks.

I think this last bit is so important. Whether the editor believes you are up to the task of successfully revising your paper is subjective, which implies that he/she will use imperfect signals of your ability when making decisions. Clarity of writing is one signal, of course, but I would add that if your tables are messy, you have typos throughout, or you didn't carefully explain your data selection criteria, then all of these may also be read as signals of general sloppiness in your analysis. Another potential issue: if the editor has seen you present papers at conferences, or make excellent comments at these conferences, this may also (at least subconsciously) be used as a signal of the likelihood that you can complete a tough request for revision.

Thank you, David Slusky, for your great journalism!  

Sunday, November 10, 2019

Testing for Heterogeneous Impacts?

I've seen it many times before. After showing your baseline results, the thing to do in applied micro research is to split your sample--based on education, gender, age, etc.--to see if impacts vary across different populations. But suppose the estimated coefficients are about the same in the two samples, while one estimate is statistically different from zero and the other isn't. What to do then? Here's what not to do: claim they are different, that there's an impact in one population but none in the other! (If the question is whether the impacts differ, test that difference directly; see the sketch below.) Ok, so that's a typical mistake, but nothing new.
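The standard move is to pool the samples and test the difference in effects with an interaction term, rather than comparing significance stars across subsamples. Here's a minimal sketch with simulated data (all variable names are hypothetical, not from any particular paper):

```python
# Minimal sketch (simulated data, hypothetical variable names):
# test whether an impact differs across two groups by interacting
# treatment with the group indicator, instead of comparing stars.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"treat": rng.integers(0, 2, n),
                   "group": rng.integers(0, 2, n)})
df["y"] = 0.2 * df["treat"] + rng.normal(size=n)  # same true effect in both groups

# The coefficient on treat:group estimates how much the impact differs
# across groups; its p-value is the test you actually want to report.
fit = smf.ols("y ~ treat * group", data=df).fit()
print(fit.summary().tables[1])
```

Because the true effect is identical in both groups here, the interaction term should come out insignificant most of the time; in real data, its p-value is the evidence for or against heterogeneous impacts.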

What I didn't know (but what seems so obvious after reading this twitter thread): finding a statistically significant impact in one subsample but not in the other, even when the true effects are identical, is especially likely when effects are moderate--not too big and not too small. Check out the cool simulation of this here.
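A back-of-the-envelope version of that simulation might look like the sketch below (my own toy version, not the code from the thread). With the same true effect in both subsamples, the share of draws where exactly one estimate is significant is 2p(1-p), where p is the power in each subsample, so it peaks when power is near 50%, i.e., at moderate effect sizes:

```python
# Toy simulation: two subsamples with the SAME true effect.
# Count how often exactly one of the two estimates is significant at 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps = 200, 5000  # observations per subsample, simulation draws

for effect in [0.1, 0.2, 0.3, 0.5]:
    mismatch = 0
    for _ in range(reps):
        sig = []
        for _group in range(2):  # two subsamples, identical true effect
            treat = rng.integers(0, 2, n)
            y = effect * treat + rng.normal(size=n)
            _, p = stats.ttest_ind(y[treat == 1], y[treat == 0])
            sig.append(p < 0.05)
        mismatch += sig[0] != sig[1]
    print(f"effect={effect}: exactly one significant in {mismatch / reps:.0%} of draws")
```

Running this, the mismatch rate is highest for the middling effect sizes and falls off when effects are tiny (neither estimate is significant) or large (both are).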

Sunday, November 3, 2019

How to Make a Specification Chart

There are so many little decisions we have to make when doing applied micro research. How should we measure our variables of interest? Which sample should be our baseline sample? Which variables should we control for--beyond the obviously important ones? The answer (hopefully) to many of these questions is "it probably doesn't matter so much". If that's the case, then ideally we'd show our readers that it really doesn't matter. This is great in theory, but if there are that many mini-decisions to make, how can we show all of this in a 30-page paper? Hans Sievertsen (@hhsievertsen) recently gave us the answer in a tweet: a specification chart. The graph looks amazing. So much information! And he even provides the code he used to make the beautiful picture.
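Use Hans's code if you can. But as a rough illustration of the general layout, here is a hand-rolled sketch in Python/matplotlib--everything in it (estimates, standard errors, specification choices) is a made-up placeholder, not a real result:

```python
# Sketch of a specification chart: estimates with 95% CIs on top,
# and below, a grid marking which analytic choices each spec uses.
# All numbers and labels are made-up placeholders.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_specs = 12
est = np.sort(rng.normal(0.15, 0.05, n_specs))  # placeholder estimates, sorted
se = np.full(n_specs, 0.04)                     # placeholder standard errors

choices = ["Controls: demographics", "Controls: region FE",
           "Sample: full", "Sample: employed only"]
grid = rng.integers(0, 2, (len(choices), n_specs)).astype(bool)

fig, (top, bot) = plt.subplots(2, 1, sharex=True, figsize=(8, 5),
                               gridspec_kw={"height_ratios": [3, 1]})
x = np.arange(n_specs)

# Top panel: point estimate and 95% CI for each specification.
top.errorbar(x, est, yerr=1.96 * se, fmt="o", capsize=3)
top.axhline(0, linestyle="--", linewidth=1)
top.set_ylabel("Estimated effect")

# Bottom panel: a square marks each choice a specification includes.
for i in range(len(choices)):
    bot.scatter(x[grid[i]], np.full(grid[i].sum(), i), marker="s")
bot.set_yticks(range(len(choices)))
bot.set_yticklabels(choices)
bot.set_xlabel("Specification (sorted by estimate)")
plt.tight_layout()
plt.show()
```

The point of the layout is that a reader can scan the top panel for how much the estimate moves, then drop down to see exactly which combination of choices produced any outlier.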
