“Often wrong, always certain”, goes a saying I once heard about economists. Frankly, I hate admitting when I’m wrong or when I don’t know something (don’t try to use that against me in an argument, I’ll totally deny that it applies in that case). I force myself to do it, and I think I succeed most of the time, but it is very unpleasant.
I think the same thing applies to other people. We would rather take an “educated guess” than say “I don’t know”, and we would rather defend our original point in an argument even when, halfway through, we may have started wondering (very, very deep down) whether we’re wrong.
A few days ago, I realized that I actually have great data to test this hypothesis. In a survey experiment pilot my colleague Olga Shurchkov and I ran recently, we asked people two multiple choice questions: (1) what is the current concentration of CO2 in the atmosphere? and (2) what is the “albedo effect”? We included “I don’t know” as an answer option and used the number of correct answers as a gauge for objective knowledge about climate science (there was another question asking people to name greenhouse gases, but that one is more complicated because there are multiple correct answers).
Our sample was not necessarily representative of the US (Amazon MTurk workers), but there is definitely a wide range of demographic and economic characteristics in our data. We never looked at what fraction of people answered “I don’t know”, but my prior was that it was low. I got really excited to have some data to test my hypothesis. I was even going to run an informal survey on Facebook, making up a fake city and asking people which continent it was located on to see how many of my friends would admit that they didn’t know.
But I decided to look at the survey data first, and frankly I was shocked. 224 out of 361 respondents (62%) admitted they didn’t know the CO2 concentration in the atmosphere (22% chose the right answer and the rest chose a wrong answer, if you’re wondering). 200 (55%) admitted that they didn’t know what the albedo effect was (20% got it right). Apparently the majority of people have no problem admitting when they don’t know something (at least on a survey). Even though my original hypothesis didn't pan out, I thought the results might be interesting to some of my blog readers.
And there you go: I was wrong.
Obviously, it’s been a while since I’ve blogged. As it turns out, I’m up for tenure review in two years, and with the publication lag being what it is (especially considering my historical rejection probabilities), I’ve been focusing on getting my working papers published. The good news is that it worked – three papers got accepted this year, and two more are under review. I finally get to work on analyzing new-ish data and putting together first drafts, which is my favorite part of the process.
I’ve also been working on Academic Sequitur (slowly but surely). We’re all set up to track new articles in 88 journals and working paper series, which is very exciting. (The website is still being built, but if you want to be notified when it’s ready for prime-time, sign up here). In the meantime, I’ve decided to post some fun facts about our current database. Keep in mind that this database isn’t representative of all research in economics/finance because we have more years of information for some journals. But for blogging purposes, it’s close enough!
First fun fact: the average econ/finance paper has 2.08 authors. About 29 percent of the papers have one author, 42 percent have two, 22 percent have three, and 5 percent have four. That covers 98.7 percent of papers. Then we get into crazy territory with papers that have 5, 10, or even 17 authors! And the record for the largest number of authors goes to…“Everything You Always Wanted to Know about Inventors (But Never Asked): Evidence from the PatVal-EU Survey” (a CEPR Discussion Paper from 2006). Let’s see if another paper comes along in the future to break that record.
Now let’s talk about the content of the articles themselves. If we don’t count word variations as unique words (“rate” and “rated”, “tax” and “taxes”, etc.), only count words that are used 3 times or more (even the internet has spelling errors!), and ignore very common English words like “the”, “I”, and “we”, the abstracts contain over 15,000 unique words. Out of these, what do you think is the most common word that economists use in their abstracts? It is…drum roll…“model”. How stereotypical, right? That is followed by (in order): “effect”, “paper”, “market”, “result”, “increase”, “country”, “policy”, “firm”, and “data”. Interestingly, “increase” is used almost 6 times more than “decrease” (which ranks 195th on the list). So maybe economics is not so dismal after all? Unless all these articles are about tax increases.
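For curious readers, here is roughly how a count like this could be done. This is a minimal sketch of my own, not our actual code: the stopword list is tiny and the suffix-stripping rule is deliberately crude (a real analysis would use a proper stemmer, like Porter’s, and a full stopword list):

```python
from collections import Counter
import re

# Tiny illustrative stopword list; a real analysis would use a full one
STOPWORDS = {"the", "a", "an", "and", "of", "in", "on", "we", "i", "to", "that", "this"}

def crude_stem(word):
    # Crude suffix stripping so that, e.g., "taxes" collapses into "tax";
    # a proper analysis would use a real stemming algorithm
    for suffix in ("ies", "ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def top_words(abstracts, k=10, min_count=3):
    counts = Counter()
    for text in abstracts:
        for w in re.findall(r"[a-z]+", text.lower()):
            if w not in STOPWORDS:
                counts[crude_stem(w)] += 1
    # Keep only words used min_count times or more, then take the top k
    return [(w, c) for w, c in counts.most_common() if c >= min_count][:k]
```

For example, `top_words(["We model taxes", "the model of taxes", "a model, a tax"])` would return `model` and `tax`, each with a count of 3.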
At this point, it would be pretty straightforward for us to release a product that lets you pick which journals, authors, and/or user-specified keywords you want to be notified about. But we’re going further and developing an algorithm that classifies articles into both broad subject areas (e.g., “Development economics”) and narrower topics (e.g., “credit constraints”). Text analysis is a difficult problem, especially when you’re dealing with text that’s not written in everyday English language (because there are fewer existing tools available to process the words). But we have a plan, and we’re confident that it will succeed! Shameless self-promotion over. Stay tuned.
Many charter schools appear to work quite well. Here are two quotes from two articles summarizing the research:
“sound research has shown that, when properly managed and overseen, well-run charter schools give families a desperately needed alternative to inadequate traditional schools in poor urban neighborhoods.” (NY Times, October 13, 2016)
“The briefest summary is this: Many charter schools fail to live up to their promise, but one type has repeatedly shown impressive results.” (NY Times, November 4, 2016)
Because in many cases admission to charter schools is done through a lottery, assignment to a charter school is literally random among the students who apply. So the level of confidence in these results should be as high as it gets. There’s also no reason to think that the “one type” of charter that has shown significant results cannot be replicated elsewhere (in fact, it has). Then why do so many liberals appear to be against charter schools?
I don’t have a good answer to that question. Liberals’ resistance to charter schools in any way, shape or form reminds me of conservatives’ resistance to any gun control regulation. No matter what type of gun control legislation is proposed, their answer is always “this is a terrible idea”. They also frequently invoke a slippery slope argument – “first, the Democrats will impose more thorough background checks, next, they will take away all our guns”. My sense is that liberal voters see charter schools as a similar existential threat to public school funding. But just like in the case of gun control, to me that logic is very dubious.
We need more evidence-based education reform. Charter schools that have been shown to work seem worthy of our support. I agree with Sue Dynarski, a prominent economics of education scholar, who was quoted in the second article as saying “To me, it is immoral to deny children a better education because charters don’t meet some voters’ ideal of what a public school should be. Children don’t live in the long term. They need us to deliver now.”
I teach masters students the basics of micro- and macro-economics. When we talk about government intervention, one of the first topics is the effect of taxes in an otherwise competitive market. By this point, it’s pretty easy for them to see that taxes hurt both consumers and producers in that market because, generally, (1) buyers have to pay more for the good than before and sellers receive less in revenue than before and (2) taxes reduce the activity that is being taxed, lowering surplus for everyone. For example, if it costs a seller $1 to make a cup of coffee and every day she was selling one to a buyer who was willing to pay only $1.05 (at some price between $1 and $1.05), placing a 10-cent tax on that market will probably eliminate that transaction. This second effect is called the “deadweight loss” of taxation because losing these transactions creates only costs (to the affected buyers and sellers) and no benefits (because the government doesn’t get tax revenue and consumers/producers do not benefit from transactions that don’t happen). That doesn’t mean we should never have taxes in competitive markets: if the government puts the tax revenue to good use, then social gains can overcome the deadweight loss. It just means there’s no free lunch!
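To make the deadweight-loss calculation concrete, here is a toy computation. The market is entirely made up by me for illustration: linear demand P = 10 − Q and linear supply P = Q, so that without a tax the market clears at Q = 5 and total surplus is 25:

```python
def traded_quantity(tax):
    # With a per-unit tax t, the price buyers pay is what sellers receive
    # plus t: solve 10 - Q = Q + t for the quantity actually traded
    return (10 - tax) / 2

def total_surplus(tax):
    # Consumer surplus + producer surplus + tax revenue equals the area
    # between the demand and supply curves up to the traded quantity
    q = traded_quantity(tax)
    return 10 * q - q * q

def deadweight_loss(tax):
    # Surplus lost relative to the no-tax benchmark
    return total_surplus(0) - total_surplus(tax)

print(deadweight_loss(2))  # 1.0: surplus falls from 25 to 24
print(deadweight_loss(4))  # 4.0: the loss grows with the square of the tax
```

Note that doubling the tax quadruples the deadweight loss in this linear setup, which is why economists often prefer broad, low taxes to narrow, high ones.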
It’s important to note that the assumption here is that we don’t want to limit the economic activity itself (e.g., because it generates pollution). When we talk about “externalities” such as pollution and how taxes can be used to resolve them, I usually ask “Do taxes to correct an environmental externality create deadweight loss?” By this point, a lot of my students have learned to equate taxes with “deadweight loss”, so many will generally say “yes”. However, that is not the case (but I’ll save that for another post).
After we cover taxes, I ask my students: “Do subsidies (in the form of a payment per unit of something produced/sold) in otherwise competitive markets create deadweight loss?” I always think this is an easy question because a subsidy is just a negative tax. The answer then should clearly be “yes”, but the students are usually stumped. So I thought I would write a post about the economics of subsidies.
Unsurprisingly, subsidies work in the opposite way that taxes do: they generally benefit both buyers and sellers by raising the amount a seller receives for selling a good and lowering the amount a buyer pays. No one participating in a subsidized market has an incentive to want to get rid of the subsidy because both sides benefit! Subsidies also increase the amount of the subsidized activity – add a 10-cent-per-cup subsidy for coffee and people will drink more coffee. Someone who wasn’t willing to pay more than $0.95 for that cup of coffee may now buy it for $1 because they also get a ten-cent subsidy that offsets some of that cost. Alternatively, if the subsidy goes to the seller, the seller may lower the price to $0.93, also inducing the buyer to buy.
But this increase in economic activity is not a good thing: the additional “units” being produced and exchanged cost more to make than buyers value them. The net benefit to society (value to the consumer minus cost to the producer) of this additional activity is therefore negative. On top of that, subsidies need to be paid for by taxes, which means possibly creating deadweight loss in another market!
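Since a subsidy is just a negative tax, the symmetry can be shown in a toy model. Again, this is my own made-up illustration (linear demand P = 10 − Q and supply P = Q), not anything from a textbook assignment:

```python
def traded_quantity(t):
    # t is a per-unit tax; a negative t is a subsidy. Solve 10 - Q = Q + t.
    return (10 - t) / 2

def net_surplus(t):
    # Consumer + producer surplus, plus tax revenue (or minus subsidy
    # outlays): the area between demand and supply over the traded quantity
    q = traded_quantity(t)
    return 10 * q - q * q

# A subsidy of 2 destroys exactly as much surplus as a tax of 2
print(net_surplus(0) - net_surplus(2))   # 1.0 (deadweight loss of the tax)
print(net_surplus(0) - net_surplus(-2))  # 1.0 (deadweight loss of the subsidy)
```

With the tax, too few units are traded; with the subsidy, too many. Either way, the wedge between what buyers value and what sellers spend producing destroys the same amount of surplus.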
One justification people give for supporting subsidies is distributional concerns. Maybe we’re losing some efficiency, but we’re making sure that (presumably poor) people can afford to buy the good in question. However, subsidies are a crude and expensive way to achieve distributional goals because they help everyone who buys in the market, rich or poor. For example, subsidizing college education will certainly help poor students, but if the subsidy is given to everyone, it becomes much more expensive in terms of the amount of revenue (and deadweight loss of taxation) that needs to be generated.
An obvious way to improve on subsidizing something for everyone is more targeted subsidies (like financial aid for poor students). However, even that is not ideal because it distorts individuals’ choices. If we start subsidizing coffee for low-income individuals, coffee will be more affordable, but people will also drink more of it relative to other goods, and it’s not clear that we (or the individuals) want that. Rather, economists advocate giving poor individuals money and letting them decide what to spend it on. That comes with its own set of issues because it creates a larger incentive to pretend to be low-income, but it also respects individuals’ choices and does not lead to unnecessary distortions.
My representative, Rodney Davis, recently introduced a health care bill "to protect people with pre-existing conditions from discrimination against insurance companies." (yes, if you think about it, that sentence is poorly written).
I just wrote to him to ask a few details about his plan. I'm sharing the letter below because it demonstrates the difficulty of ensuring that individuals with pre-existing conditions can buy affordable insurance.
"I read about your new health care bill to make sure people with pre-existing conditions can buy health insurance. I'm just curious as to what happens if insurers offer someone who has cancer insurance for, say, $50,000 per year. Would you consider that acceptable? If not, what provisions does your plan have in place to ensure that does not happen?
If your plan has limits on whether insurers can charge different prices based on pre-existing conditions, how will the plan ensure that younger and healthier people do not have a disincentive to sign up because they are being offered insurance at a price that is much higher than their expected healthcare costs?"
There are really only two ways (that I can think of) to ensure that (1) people with pre-existing conditions are not being offered health insurance only at exorbitant prices and (2) you don't create a "death spiral" where people buying insurance on the individual market are increasingly sick because the healthier people drop out due to rising prices. The first is having an individual mandate (a stick) and the second is a generous tax credit that makes buying health insurance very cheap on the margin even if the pre-credit price is very high (a carrot). I look forward to seeing what Davis's actual plan is (the "Better Way" Republican agenda does mention a tax credit).
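The “death spiral” logic can be illustrated with a toy simulation (my own sketch with made-up numbers, not anything from the bill): the insurer sets the premium at the break-even level, the average expected cost of the current pool, and anyone whose own expected cost is below the premium drops out, which pushes the next premium even higher:

```python
def death_spiral(expected_costs):
    """Return the sequence of premiums as the insurance pool unravels."""
    pool = list(expected_costs)
    premiums = []
    while pool:
        premium = sum(pool) / len(pool)  # break-even: average cost of the pool
        premiums.append(premium)
        stayers = [c for c in pool if c >= premium]  # healthier people drop out
        if len(stayers) == len(pool):
            break  # nobody left to drop out: the pool is stable
        pool = stayers
    return premiums

# Ten people with expected annual costs of 1..10 (say, in $1000s): the market
# unravels until only the sickest person is left, paying their full cost
print(death_spiral(range(1, 11)))  # [5.5, 8.0, 9.0, 9.5, 10.0]
```

In reality, risk aversion keeps some healthy people in the pool even at prices above their expected costs, so the unraveling is rarely this complete. But this is exactly the mechanism a mandate (stick) or a generous tax credit (carrot) is meant to break.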
I’ve been getting increasingly frustrated with how hard it is to keep up with new research. At some point after starting my job, I subscribed to table of contents emails for the top five + a few other econ journals. Then I noticed that, once in a while, I would see a newly published article in my research area that I had never heard about. Given that it typically takes at least 1-2 years to publish a paper in economics and that drafts are widely available as working papers, that was not a good sign.
To try to remedy that problem, I then subscribed to two working paper series: NBER and SSRN. But even that didn’t seem good enough. In environmental economics, there are plenty of good researchers who are not part of the NBER (and thus can’t release their working papers that way) and do not use SSRN. But how could I possibly remember to check all of their websites once in a while to see new papers? On top of that, my inbox was getting bombarded with abstracts of many irrelevant papers, and I was wasting a lot of my already precious time sorting through them to figure out which ones I should read.
I looked to see if there was anything out there that could help me stay up to date efficiently and sanely. I won’t bore you with the details, but places like Research Gate, Google Scholar, Mendeley, RePEc, and others fell far short of what I was looking for. So I decided to build it myself (or, more accurately, hire a programmer to build it for me) and in the process also create a tool for others who may be having the same problem. I’m calling it “Academic Sequitur”.
The idea is simple: Academic Sequitur will be a one-stop shop where you can create a “portfolio” of research to follow, whether it’s research in a specific journal, on a specific topic, or by a specific researcher. We’re starting with economics and finance but depending on how things go, we may expand to other disciplines. A beta version will be available in about 6 months, so stay tuned! And if you want to be notified of when Academic Sequitur comes out or have any other thoughts, let me know by emailing email@example.com.
(This is based on a true story, but I may have changed some details like field of study and gender to protect the student’s anonymity)
Shortly after Trump got elected president, a student made an appointment to talk to me. She was in the last year of her finance degree and had a good job lined up, but was doubting whether she should continue with her life plan in light of the election. She realized that she wanted to make a difference in the world and a career path in finance didn’t seem like a good way to do so. Instead, she was considering going to work for a women’s reproductive rights organization (I definitely changed this detail, but it roughly captures the spirit of this student’s desires).
I told her to consider sticking to finance and donating a large part of her salary to her favorite organization. Why? Because individuals who hold high-paying jobs can often make a lot more of a difference this way. Her starting finance salary would probably have been at least $120,000 a year. If she left finance and went to work for the non-profit, she would make at best $40,000 a year. But what if she donated $80,000 of her finance salary to the non-profit instead? Well, the non-profit could hire TWO people like her and she would still earn $40,000 per year, as much as she would have earned at the non-profit.
Of course, there are some caveats to this. She would probably have to work longer hours in finance and maybe she would enjoy it less than the non-profit job. So to stay indifferent between the two, maybe she would donate “only” $50,000. Still, the organization might prefer having that money to having her work there, especially if she didn’t have any special training.
That brings me to the second piece of advice I gave her. If, after considering the high-paying-job-plus-donations option, she still thought going into the non-profit world was better, I advised her to think about positions in non-profits where her finance training would be useful. For example, if she wanted to help low-income women, perhaps she could get involved with an organization that provides financial training to disadvantaged women or manage a non-profit’s endowment. Even though that may not have been her first choice, it would probably be more valuable to society.
So as we sit here wondering, “What the f*** do I do now?”, consider whether your salary allows you to make a substantial donation to the many organizations out there fighting the good fight. If you’re a student, don’t feel like you have to drop everything and become a full-time activist (though you should still call your Congressman once in a while and follow the non-alternative news!). First, sit down and think about how much money you can generate for your favorite organization by not working for them. Alternatively, consider which causes your skills could be useful for – a lawyer going to work for ACLU is a lot more useful than a lawyer going to build houses for Habitat for Humanity.
To be clear, I am not saying that you should take a job you find immoral or incredibly unpleasant. There is ultimately nothing wrong with leaving (or not taking) a high-paying job where you don’t feel like you’re making a difference for a low-paying job where you feel like you do. And of course we need people actually working at organizations like ACLU or Planned Parenthood (yes, I’m shamelessly promoting my favorite ones). But these organizations need money too, and if you face a high opportunity cost of joining them full-time (i.e., your salary is or will be high), consider giving them your money instead. You might not get the same pat on the back from your activist friends, but I promise you that you will be making a big difference!
Let’s talk about genetically modified organisms (GMOs). But first, let me ask you a question. Are chainsaws good or bad? That’s a weird question, isn’t it? A chainsaw can be very useful if you need to cut something, but it can also be dangerous if you’re not careful or if you deliberately attack someone with it.
Now let’s go back to talking about GMOs. As I elaborate on below, it’s just as silly to ask whether GMOs are good or bad as it is to ask whether chainsaws are good or bad. Genetic modification is a tool. If used wisely, it can provide a significant advantage over traditional plant-breeding techniques. But it can also be used for evil. So my proposal is that we stop treating all GMOs as being the same (this also goes for people who love GMOs!) and instead think about what exactly is being genetically modified.
Let me demonstrate why this is important. Two very common genetic modifications out there are to (1) make crops herbicide-resistant (e.g., "Roundup ready corn") or (2) make crops produce their own pesticides (e.g., "Bt corn"). What effect would the first modification have? Well, it’s likely to increase the amount of herbicide farmers spray on crops because now you don’t have to worry about killing the crops themselves. This may be undesirable to the extent that higher levels of herbicide are more harmful to human health (although there’s no evidence that Roundup is harmful to human health unless you are stupid enough to swallow it in high doses) and to the extent that it contributes to the creation of weeds resistant to Roundup ("superweeds"). But making crops produce their own pesticides will likely decrease the amount of pesticide farmers spray on crops because the crops are making their own (oh, and for the record, organic farmers use Bt as a pesticide all the time). That could be a significant improvement for the environment, for crop productivity and (because less pesticide is used) for human health.
Fine, but these are only the intended consequences of genetic engineering. What about the unintended ones? Well, let’s think about traditional plant breeding where you’re letting the mutations in DNA happen naturally and selecting the offspring with the best traits. We’ve done A TON of that. How else do you think your banana or your “traditional” corn got here? And we really had no idea what was being altered in the plants’ DNA. It was essentially impossible to guarantee that the new variety was different ONLY in the desirable traits. By contrast, because genetic engineering is very targeted, we can be very confident that no other changes are taking place. So it’s pretty hard to claim that genetic engineering will produce unintended consequences (at least on a systematic basis) – I would be much more worried about that traditionally bred apple you’re eating.
But, you say, these traditional varieties have been grown for hundreds or thousands of years so if there were something wrong with the crops that we developed during this time, we would know by now. That’s certainly true if a mutation made a crop poisonous such that eating a bite killed you. But if we accidentally bred something that, say, doubled your chances of developing a certain kind of cancer if eaten for prolonged periods of time, there’s a good chance no one would have noticed because they were too busy dying of other things. And many fruits and vegetables do contain toxins naturally. So enjoy those glycoalkaloids in your "non-genetically modified" potatoes!
In summary, there is absolutely no reason to think that the entire concept of genetically modifying organisms is a bad idea. By all means, we should ask if a specific genetic modification can have adverse health or environmental consequences. But let’s stop being unscientific about this whole GMO thing by saying we shouldn’t do genetic modification at all.
A while back, I posited a simple mechanism by which completely ineffective treatments can appear effective and maybe even gain prominence as "alternative” or “traditional” medicine. So then are all alternative medicines ineffective? After all, there's that famous joke: "Q: What do you call non-traditional medicine that works? A: Traditional medicine."
At first glance, there's a lot of logic to that idea. If something really works, won't it soon get incorporated into mainstream medicine? Here's a simple explanation for why the answer is "no".
In the US, non-traditional medicine can be roughly described as anything that seems like it’s supposed to make you healthier in one way or another, but with the cautionary label "This statement has not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease." If you want to “legally” be able to claim that something works, you need to have Food and Drug Administration (FDA) approval.
Because FDA approval requires clinical trials, which are expensive, private companies will only undertake such trials if they expect to profit from the results. But private companies will not be able to patent most alternative medicines because most are by definition not novel treatments but ones that have been in use for years, decades, or even centuries. And you cannot patent something that isn’t novel. Instead, a reasonable expectation is that other companies will use the results to market the same medicine and the company who did the testing will not be able to recoup the trial costs by charging more for the medicine.
Thus, testing whether alternative medicine is effective is a “public good”: society (including other companies) captures most of the benefits, while whoever does the testing bears the full cost. This implies that the private market will under-test alternative medicine. In fact, the only reason private companies would test anything that they can’t patent is for PR purposes, which is probably a pretty weak incentive.
The WRONG conclusion to draw from this analysis is that alternative medicine is effective but overlooked by the private sector. But, as my previous post makes clear, alternative medicine could just be “correlated” with feeling cured or work as a placebo. So what do we do about this? The clearest implication is to have public funding of scientific research to test which alternative medicines do and do not work.
It’s true that there is already some testing of alternative medicine. But if you search for “alternative medicine research funding”, you basically get nothing (you get much better results for "dog diabetes research funding"). And given how prevalent the use of alternative medicine is, it seems like we should be funding more research of its effectiveness. It’s worth it (up to a point, of course) to spend some money up front and either put a definitive nail in the coffin of a useless approach or discover medicine that could be incorporated into everyday medical practice. Undoubtedly, some people will keep taking “natural” medicine no matter what research says. But we should figure out what’s true and what’s not.
This is another “fun with math” post meant to impart simple math knowledge that could make the world a better place.
If you aren’t a statistician/math aficionado/empirical economist, you’ve probably never thought about what a “percentage point” is or how it’s different from a “percent”. Frankly, I hadn’t either until one of my advisers in graduate school asked if I was reporting results in percent or percentage points. The realization that there was a difference was eye-opening. Basically, a “percentage point” is always out of 100, whereas a “percent” is always relative to some baseline rate. We can also think of a percentage point as telling us something on an absolute scale and a percent as telling us something on a relative scale.
Let me give you an example of why it matters. What sounds scarier, if I tell you that your probability of getting into a fatal car crash is 50 percent higher when driving over the speed limit or if I tell you that your probability of getting into a fatal car crash is 0.5 percentage points higher when driving over the speed limit? Chances are, the first one sounds a lot worse. But the 50% number is relative to some baseline crash rate, which is probably very very low (maybe 1 in a million if we’re talking about a day’s worth of driving). So multiplying that by 1.5 still leaves you pretty safe. By contrast, raising your risk of a fatal car crash by 0.5 percentage points brings it from 0.0001% to 0.5001% – a more than 5,000-fold increase! (By the way, “%” usually refers to “percent”. If you want to talk about percentage points, you should just write it out.)
Why should you care about this difference? Because it’s often helpful to know differences in percentage points rather than percent, especially when it comes to rare events. For example, the risks of birth defects rise dramatically in percent/relative terms with the mother’s age, but the percentage points/absolute changes are actually pretty small. According to this page, 20 year old women have about a 0.19% chance of having a baby with some chromosomal anomalies, whereas 40-year-old women have a 1.52% chance. If you calculate the percent increase in risk, it’s huge: 700% (the risk is eight times higher). But the percentage point change is clearly much smaller, just 1.33.
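The two measures are easy to compute side by side. Here is a quick sketch using the figures quoted above:

```python
def percentage_point_change(old_rate, new_rate):
    # Absolute change between two rates, both expressed in percent
    return new_rate - old_rate

def percent_change(old_rate, new_rate):
    # Relative change, in percent of the baseline rate
    return (new_rate - old_rate) / old_rate * 100

# Chromosomal-anomaly risk: 0.19% at age 20 vs 1.52% at age 40
print(round(percentage_point_change(0.19, 1.52), 2))  # 1.33 percentage points
print(round(percent_change(0.19, 1.52)))              # 700 percent
```

Same two numbers, and yet 700 sounds terrifying while 1.33 sounds manageable, which is exactly why it pays to ask which measure you are being shown.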
Moral of the story - as a rule of thumb, if you want to scare or impress someone, use percent. If someone is trying to scare or impress YOU, ask them what the percentage point/raw difference is.