Let me know where I account for population size in constructing standard errors.
The very state of economics, ladies and gentlemen.
Don't they teach finite population correction factors at directional state schools? jfc
hey, didn't they teach that finite population correction factors depend on strong distributional assumptions that make no sense for economics data?
LOL, they are derived from elementary combinatorics. Do you have any clue what you're talking about?
The only "distributional assumption" involved is the central limit theorem.
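For anyone following along, the factor under discussion for simple random sampling without replacement is sqrt((N - n) / (N - 1)). A minimal sketch with made-up numbers (the values below are illustrative, not from any paper in the thread):

```python
import math

def fpc(N: int, n: int) -> float:
    """Finite population correction factor for simple random
    sampling without replacement from a population of size N."""
    return math.sqrt((N - n) / (N - 1))

# Corrected SE of the sample mean: (s / sqrt(n)) * fpc(N, n)
N, n, s = 10_000, 400, 2.5           # population size, sample size, sample SD (hypothetical)
se_naive = s / math.sqrt(n)          # 0.125
se_corrected = se_naive * fpc(N, n)  # shrinks as the sampling fraction n/N grows

print(se_naive, se_corrected)
```

As n approaches N the factor goes to zero: sampling the whole population leaves no sampling uncertainty in the mean.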
The Datacolada post completely misses the point of the QJE paper, and the fact that EJMR doesn't realize this is a sign of how everyone here is a regmonkey
I wouldn't go that far, but I think you're broadly right. The QJE paper makes several points.
One is that people have been using standard errors which are too small in a way that empirically matters for a large fraction of published papers. This point survives the critique, because everyone uses Stata.
A second is that this is because of issues with leverage, which often stem from the design and make the choice of how to estimate SEs matter. This point is not disputed.
The third is that randomization inference is often a superior option to robust SEs. The point of the blog post is that this claim deserves a caveat: you can instead use different robust SEs that are more appropriate to small-sample situations.
So it doesn't really amount to invalidating the QJE paper. It just adds detail and nuance to what the QJE paper says.
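To make the HC1-vs-HC3 contrast concrete: both are sandwich estimators, differing only in how the squared residuals are weighted (HC1 applies a uniform n/(n-k) scaling; HC3 divides each residual by (1-h_i)^2, inflating high-leverage observations). A hedged numpy sketch on simulated data — the design point at x = 10 is invented to create the leverage problem the paper is about:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = rng.uniform(0, 1, n)
x[0] = 10.0                                   # one high-leverage design point
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)   # leverage (hat) values
k = X.shape[1]

def sandwich(omega):
    """Robust covariance (X'X)^-1 X' diag(omega) X (X'X)^-1."""
    meat = X.T @ (omega[:, None] * X)
    return XtX_inv @ meat @ XtX_inv

hc1 = sandwich(e**2) * n / (n - k)            # uniform small-sample scaling
hc3 = sandwich(e**2 / (1 - h) ** 2)           # per-observation leverage adjustment

se_hc1 = np.sqrt(np.diag(hc1))
se_hc3 = np.sqrt(np.diag(hc3))
print("slope SE  HC1:", se_hc1[1], " HC3:", se_hc3[1])
```

In practice you wouldn't hand-roll this; statsmodels exposes the same estimators via `OLS(y, X).fit(cov_type="HC3")`. The point of the sketch is just that the two differ exactly where leverage is high.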
Eh, the point of the blog post is also that randomization tests are error-prone
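For what it's worth, the procedure itself is simple to state: re-randomize treatment assignment, recompute the statistic, and see how extreme the observed one is. A stdlib-only sketch of the textbook two-arm case, with invented outcome data — the error-proneness the post flags lives in the bookkeeping for designs more complicated than this:

```python
import random
import statistics

random.seed(1)

# Hypothetical outcomes for a small two-arm experiment (not real data).
treated = [5.1, 6.3, 4.8, 7.0, 5.9]
control = [4.2, 5.0, 4.7, 4.9, 4.4]

observed = statistics.mean(treated) - statistics.mean(control)
pooled = treated + control
n_t = len(treated)

count = 0
reps = 10_000
for _ in range(reps):
    random.shuffle(pooled)                    # re-randomize assignment
    diff = statistics.mean(pooled[:n_t]) - statistics.mean(pooled[n_t:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = (count + 1) / (reps + 1)            # add-one keeps p strictly positive
print("observed diff:", round(observed, 2), " randomization p:", round(p_value, 3))
```

With only 252 possible assignments here one could enumerate them exactly instead of sampling; the Monte Carlo version is what people actually run at realistic sizes.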
all of this is econometrics 101
thanks qje
https://stats.stackexchange.com/q/514259
Other than existence of first and second moments, explain which distributional assumptions are being made.
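One way to see this is to simulate: draw simple random samples without replacement from a deliberately skewed (lognormal) population and check that the corrected formula sqrt(sigma^2/n * (N-n)/(N-1)) matches the empirical SD of the sample mean, no normality anywhere. A hedged sketch, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(42)
N, n, reps = 2_000, 200, 5_000

pop = rng.lognormal(mean=0.0, sigma=1.0, size=N)   # heavily skewed population
sigma2 = pop.var()                                  # population variance (N divisor)

# Theoretical SD of the sample mean under SRS without replacement:
theory = np.sqrt(sigma2 / n * (N - n) / (N - 1))

means = np.array([rng.choice(pop, size=n, replace=False).mean()
                  for _ in range(reps)])
print("theory:", theory, " simulated:", means.std())
```

The two numbers agree to within simulation noise, which is the combinatorics-only point: the formula comes from counting subsets, not from any parametric model of the data.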
I had similar views initially, but then someone here pointed out that a paper saying that you should use [em]reg y x, vce(hc3)[/em] would not have ended up in QJE. If randomization isn't superior to HC3, then a blog post would have been sufficient.
The main result of the paper is that the randomistas and their AER papers all have the wrong standard errors. Whether that's worth a QJE is up for debate.
The confidence of this guy cracks me up. Talk about Dunning-Kruger.