### Science

Recently a major economics paper was found to contain basic excel errors, among other problems, and an amusing storm of controversy is growing around the paper. The controversy arises because the paper was apparently quite influential in promoting the most recent round of austerity politics in the western world, and the authors themselves used it actively in this way. The authors even managed to find a magic number – 90% – at which government debt (as a proportion of GDP) throttles growth, a threshold that many small government activists and “sustainable deficit” types have been seeking for years. It’s like manna from heaven!

There’s been a lot of hilarity arising from this, about how woeful the economics field is and about how vulnerable policy-makers on crucial issues like government spending can be to even quite terrible research that supports their agenda. But there has also been some criticism on statistics and academic blogs about the use of excel for advanced analysis, and what a bad idea it is. Andrew Gelman has a thread debating how terrible excel is as a computational tool, and Crooked Timber has a post (with excellent graphic!) expressing incredulity that anyone would do advanced stats in excel. While I agree in general, I feel an urgent need to jump to the defense of MS Excel.

MS Excel is great. It’s much, much more convenient than a calculator, and it can be used to do quite complex calculations (sums, multiplications and the like) in multiple directions that would take ages on a calculator. On most computers now the calculator is buried or, if you’re a Windows user, crap, and if you need anything more than addition it’s much more convenient to drag out excel. Sure, it takes a moment to load compared to your calculator function, but it is so much easier to compare numbers, to calculate exponents and logs, and to present simple results in excel than in a calculator.

As a simple case in point: if you get regression coefficients from Stata you can copy and paste them into excel and exponentiate to get relative risks, etc.; then you copy the formulas below, run a new regression model (with, e.g., random effects that weren’t in the previous one) and paste the results, enabling you to compare between models quickly and easily. Similarly, if you’re checking a paper to see if it calculated odds ratios or relative risks, you can chuck those numbers into excel and do the comparisons with the contingency table right there in front of you. Excel offers a simple, graphically convenient way to visualize numbers. This is especially useful when the task you’re approaching is conceptually very simple (like a contingency table) but takes a bit of time to do on a hand calculator, and takes a bit of time to convert to the file formats required in Stata or R. In the time it takes me to think about how to structure the problem, input four lines of data to R, and then write the code to calculate the odds ratios, I can do the whole thing in excel, have the contingency table in front of me to check I’ve made no transcription errors from the paper, and fiddle quickly with changing numbers.
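For what it’s worth, the contingency-table check described above is the same few cells of arithmetic whatever tool you use. Here is a minimal Python sketch of that calculation; the 2×2 table values are made up for illustration:

```python
# Compare odds ratio and relative risk from a 2x2 contingency table,
# exactly the check you'd do in a few spreadsheet cells.

def odds_ratio(a, b, c, d):
    """a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    """Risk in the exposed group divided by risk in the unexposed group."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical table: 20/80 events among exposed, 10/90 among unexposed
a, b, c, d = 20, 80, 10, 90
print(round(odds_ratio(a, b, c, d), 2))     # 2.25
print(round(relative_risk(a, b, c, d), 2))  # 2.0
```

As in the spreadsheet version, the point is that you can see the whole table while you fiddle with the numbers, so transcription errors are easy to spot.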

If you’re doing cost-effectiveness analysis in TreeAge (shudder) or R, excel is a really useful tool both for outputting results to something that is vaguely attractive to use, and for doing ballpark calculations to check that your models are behaving reasonably. This is especially useful if you’re doing stochastic Markov models, which can take hours or days to run in TreeAge, because you can’t trust software like that to give you the correct answer if you try to treat your stochastic model as a simple decision tree (because of the way that TreeAge faffs around with probability distributions, which is non-intuitive). Make a few simple assumptions, and you can do approximate calculations yourself in excel, and fiddle with key numbers – cohort size or a few different parameters – and see what effect they have.
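As a sketch of what such a ballpark check looks like, here is a toy deterministic Markov cohort model in Python. The cohort size and transition probabilities are entirely hypothetical, and there is no background mortality; this is the kind of back-of-envelope check described above, not a real model:

```python
# Toy deterministic Markov cohort model for sanity-checking a stochastic
# model. All parameters are hypothetical; healthy people can fall sick,
# and sick people can die, in each cycle.

def run_cohort(n_cycles=10, cohort=1000.0, p_sick=0.10, p_die_sick=0.20):
    """Track a cohort through healthy -> sick -> dead states."""
    healthy, sick, dead = cohort, 0.0, 0.0
    life_years = 0.0
    for _ in range(n_cycles):
        new_sick = healthy * p_sick    # healthy people who fall ill
        new_dead = sick * p_die_sick   # sick people who die this cycle
        healthy -= new_sick
        sick += new_sick - new_dead
        dead += new_dead
        life_years += healthy + sick   # person-years lived this cycle
    return healthy, sick, dead, life_years

# Fiddle with one parameter at a time and watch the effect
print(run_cohort())
print(run_cohort(p_die_sick=0.30))
```

Changing one parameter and eyeballing the output is exactly the kind of fiddling that is quick in excel or a short script but painfully slow in a full modeling package.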

Recently I was helping someone with survival analysis, and she was concerned that her definition of time to drop-out was affecting her results. She conducted a sensitivity analysis in Stata to see what effect it was having; although with correct programming she could have produced all the material she needed in Stata, writing and debugging that code can be time-consuming if you aren’t a natural programmer. It’s much easier with modern machines to just run the regression 10 times with different values of drop-out time and plot the resulting hazard ratios in excel.

So, I think excel is a very useful tool for advanced modeling, precisely because of its ease of use and its natural, intuitive feel – the properties that recent excel bashers claim make it such a terrible device. While I definitely think it should not be used for advanced models themselves, I find it a hugely valuable addition to the model-building process. Reproducible code and standardized tools are essential for publishable work, but unless you are one of those people who never does any fiddling in the background to work out what’s going on in your model, excel will turn out to be your go-to tool for a wide range of support tasks.

In any case, the bigger problem with Reinhart and Rogoff’s work was not the excel error. Even if they had got the excel code right, their results would still have been wrong because their modeling method was absolutely appalling, and should never have seen the light of day, even at a conference. The main flaws in their work were twofold:

• They binned years together, essentially giving different years different weights in the final model
• They stripped the years out of their time series context, so crucial information contained in the time ordering of deficits and growth was lost

I think the second flaw is the most specifically terrible. By using this method they essentially guaranteed that they would be unable to show that Keynesian policies work, and they stripped the cause-effect relationship from all data collected in the Keynesian era (which lasted from the start of their data series to about 1970). In the Keynesian era, we would expect to see a sequence in which deficit increases follow negative growth, so unless the negative growth periods are very short and random, Reinhart and Rogoff’s method guarantees that this looks like an association between negative growth and higher deficits. If Keynesian policies actually work, then we would subsequently see an increase in growth and a reduction in deficits – something that by design in Reinhart and Rogoff’s model would be used to drive the conclusion that higher debt causes lower growth.
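The artifact is easy to demonstrate with a toy simulation (my own illustration, not Reinhart and Rogoff’s data, and all numbers are invented). Growth shocks are generated exogenously, and debt responds counter-cyclically to weak growth, so causality runs strictly from growth to debt; pooling the years by debt level, as their method effectively does, still “finds” lower growth in high-debt years:

```python
# Toy simulation of the pooling artifact: debt rises when growth is
# weak (a Keynesian fiscal response) and reverts toward a long-run
# level in good times. Debt never causes anything here, yet pooling
# years by debt level makes high debt look like it causes low growth.
import random

random.seed(42)
growth_hist, debt_hist = [], []
debt = 60.0  # debt/GDP, %
for _ in range(500):
    g = random.gauss(2.0, 2.5)  # exogenous business cycle
    # counter-cyclical deficits: debt jumps when growth is below trend
    debt = 60.0 + 0.7 * (debt - 60.0) + 4.0 * (2.0 - g)
    growth_hist.append(g)
    debt_hist.append(debt)

# Pool the years with time ordering stripped, Reinhart-Rogoff style:
median_debt = sorted(debt_hist)[len(debt_hist) // 2]
high = [g for g, d in zip(growth_hist, debt_hist) if d >= median_debt]
low = [g for g, d in zip(growth_hist, debt_hist) if d < median_debt]
print(sum(high) / len(high), sum(low) / len(low))
```

The high-debt years show markedly lower average growth, purely because recessions and their fiscal aftermath coincide with elevated debt; the time ordering that would reveal the true direction of causation has been thrown away.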

In short, no matter what package they used, and no matter how sophisticated and reproducible their methods, Reinhart and Rogoff’s study was designed[1] to show the effect it did. The correct way to analyze this data was through the presentation of time series data, probably analyzed using generalized least squares with a random effect for country, or something similar. Using annual data I think it would probably be impossible to show the relationship between debt and growth clearly, because recessions can happen within a year. But you could probably achieve better, more robust results in excel using proper time series data than you could get in R from Reinhart and Rogoff’s original method.

The problem here was the operator, not the machine – something which should always be remembered in statistics!

—-

fn1: I use the term “was designed” here without any intention to imply malfeasance on the part of the authors. It’s a passive “was designed”.

### The Yellow Dragon can use Stinking Cloud at will

Today it was 26C in Tokyo, and we had our first taste of this year’s yellow dust, the strange and nasty pollution that tends to drift over Japan from China during spring and summer. Today’s was the worst I have ever seen in 5 years in Japan – the above photograph, taken from my ground floor balcony, shows the sky at about 3pm today, just after the cloud reached us. Apparently in Matsue, in Western Japan, visibility was down to 5 km. In case this seems like a strange thing to care about, let me assure you this “weather” is not pleasant: it causes sneezing, eye irritation, headaches and drowsiness in many people when it is at its worst, and I think some towns in Kyushu issued alerts that would cause some people to stay inside (especially those with respiratory problems). The US army monitors this phenomenon in Korea and issues regular warnings.

Of particular recent concern is the increasing concentration of what the Japanese call “PM2.5,” very small particles of pollutants less than 2.5 microns in size, which seem to arise from industrial pollution and smog, and which have specific associated health concerns. According to the Global Burden of Disease 2010, ambient PM pollution is the 4th biggest cause of lost disability-adjusted life years in China, and ranks much higher as a cause of years of life lost than of years of disability. By way of comparison it is ranked 16th in Australia and 10th in the USA.

Some part of the yellow dust problem is natural, due to sandstorms in the interior of China, but in the past 10 years the problem has become worse and its health effects more significant. No doubt part of the concern about its health effects arises from greater awareness, but there is also a confluence of factors at work in China that create the problem: desertification, soil erosion and pollution, and industrial pollution due primarily to coal power and transport. It’s becoming increasingly clear that as China develops, it needs to make a shift away from coal power and personal transportation, and it needs to do it soon. No matter how bad the yellow dust is in Japan, it has become very bad in China, and concern is growing about the seriousness of its health and economic effects.

This puts China on the horns of a dilemma. Development is essential to the improvement of human health, but the path China has taken to development, and the rapidity of its industrial and economic growth, are seriously affecting environmental quality. It’s possible that China is the canary in the coalmine of western development, and may be the first country to find its economic goals running up against its environmental constraints – and this despite a rapid slowing in population growth. China is going to have to start finding ways to reverse desertification, soil erosion, and particulate pollution, because it cannot afford to continue losing marginal farmland, degrading the quality of its farmland, and basing its industrial and urban growth on highly-polluting fossil fuels.

This raises the possibility that China needs to introduce a carbon tax (or better still, a carbon-pricing system) for reasons largely unrelated to global warming. A carbon pricing system with options for purchasing offsets, linked into the EU market, would potentially encourage reforestation and reductions/reversals in the rate of desertification; it would also provide economic incentives for investments in non-fossil fuel-based energy sources, probably nuclear for the long term and renewables for the short term. The government, by selling off permits, would be able to raise money to help manage the infrastructure and health needs of the poorest rural areas most in need of immediate development. These effects are important even without considering the potential huge benefits for the world from China slowing its CO2 emissions. I notice I’m not alone in this idea; Rabett Run has a post outlining the same environmental issues, and suggesting that there are many direct economic and social benefits of such a system.

This is not just of practical importance to China, but it’s rhetorically a very useful thing to note: that a lot of carbon sources (and most especially coal) have huge negative health and social consequences in their own right; raising the cost of using them and finding financial incentives to prevent or reverse deforestation is of huge benefit for a lot more reasons than just preventing runaway climate change. It would be cute indeed if China’s immediate economic and environmental problems became the cause of strong action to prevent climate change; on the other hand, it would be very sad if the focus on the AGW aspects of carbon pricing – which are a shared international burden rather than a national responsibility – led China’s decision makers to miss the other vital environmental problems it can address. Especially if failure to address those other environmental problems caused China’s economic growth and social liberalization to stall or fall backwards.

If any country is going to run up against environmental limits to growth, it is China; and if China can avoid that challenge, and the social and health problems it will cause, then there is great hope for the future of the planet. So let’s hope the Chinese can come to terms with their growing environmental challenges as adroitly as they have dealt with some of their others … and let’s hope that their efforts to tackle those problems benefit the rest of the world too.

This week a student and I published an article in PLOS ONE examining the relationship between healthcare-related expenditure and financial catastrophe in Bangladesh. Because PLOS ONE is an open access journal it is possible to read the entire article free online, here. Our study was a statistical analysis of data from a probability-sampled survey of households in Rajshahi, an urban area in Northwest Bangladesh. We collected data on their self-reported illness, household consumption and healthcare-related payments, and used it to estimate the prevalence and risks of financial catastrophe.

Bangladesh doesn’t really have any effective risk-pooling mechanisms, and a large portion of all health financing is derived from direct payments by individuals, usually referred to as out-of-pocket (OOP) payments. The lack of risk-pooling mechanisms means that households with limited savings are at risk of financial catastrophe from unexpected healthcare costs, and may have to use a wide range of quite unpleasant coping mechanisms to deal with the costs. Our research project aimed to identify the drivers of costs, the factors associated with financial catastrophe, and the coping mechanisms used to deal with high costs.

These kinds of research projects have a lot of challenges, and are necessarily flawed as a result. In low-income nations like Bangladesh it is difficult to assess wealth directly, since households often obtain income in kind or through bartering or intensive production (family gardens, etc.), and official income is often not declared in order to avoid taxes or other costs. This is usually dealt with by assessing household consumption, rather than income, adjusting for fixed and productive assets. It’s also difficult to assess illness, which is usually done through self-report, and medical expenses can obviously be hard to keep track of. There is an extensive body of literature on how to deal with these problems, though, and we used mostly quite standard methods to handle them. Despite the obvious limitations of such a survey, I think this one presents fairly robust results.

We found a high prevalence of financial catastrophe, with an average of 11% of household consumption spent on healthcare and 9% of households facing financial catastrophe under our definition. Financial catastrophe was much more likely in the poorest households, even though these households spent considerably less on healthcare, and financial catastrophe was also associated with inpatient service use. Chronic illness was associated with higher OOP payments. Bangladesh is currently passing through the “epidemiological transition,” in which chronic non-communicable disease (NCD) prevalence is rising, but infectious diseases remain a significant problem, so the finding that chronic illness is associated with increased OOP payments is concerning: with a baseline proportion of their income already consumed by such illnesses, households will be less able to adapt to unexpected sudden illness or injury, both of which are relatively common in low-income countries compared to high-income countries.
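For readers unfamiliar with how such an indicator is constructed, here is a minimal sketch of a catastrophic-payment calculation. The 10%-of-total-consumption threshold is a common choice in this literature, not necessarily the exact definition used in our paper, and the household figures below are invented:

```python
# Flag households whose out-of-pocket (OOP) health payments exceed a
# given share of total household consumption, then report the
# prevalence of "financial catastrophe" in the sample.

def catastrophe_rate(households, threshold=0.10):
    """households: list of (oop_payment, total_consumption) pairs."""
    flags = [oop / cons > threshold for oop, cons in households]
    return sum(flags) / len(flags)

# Hypothetical survey of five households (annual figures, same currency)
data = [(500, 60000), (9000, 70000), (100, 30000), (4000, 35000), (0, 50000)]
print(catastrophe_rate(data))  # 0.4
```

In practice the denominator is often capacity to pay (non-food consumption) rather than total consumption, which changes the threshold but not the logic.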

Our findings suggest that Bangladesh needs to move rapidly to implement and scale up risk-pooling mechanisms; deal with the problems in public facilities that mean they don’t seem to be protective against financial catastrophe even though they are ostensibly free or heavily subsidized; and prioritize NCDs in its health policy agenda. We’re currently conducting more research on disease-specific costs, coping mechanisms, and other aspects of the health-financing challenges facing Bangladesh. Other countries in Asia are moving towards universal health coverage (UHC) and Bangladesh lags some of them; but with care, a little reform, and some coordinated action to target NCDs, there’s no reason that Bangladesh, despite its poverty, can’t follow in the footsteps of countries like Vietnam in reducing the risk of financial catastrophe and improving healthcare access for the poorest members of its population.

As an aside, 9% is a very high prevalence of financial catastrophe, but I’d be interested to see how it compared with the USA (which also doesn’t have widespread and effective risk-pooling mechanisms). I don’t think the research is done the same way for US systems as in low-income countries, but there appears to be some evidence that financial catastrophe can be high, at least amongst the poor. For example, this New England Journal of Medicine article suggests that Medicare provides limited protection against financial catastrophe, and shows figures indicating that 4% of recipients pay >$5,000 on medical expenses in any one year, which would probably qualify them for financial catastrophe (since most Medicare users have low incomes).

I would be interested to see the rates of financial catastrophe amongst the uninsured in the USA, and to compare them before and after Obamacare is introduced, but I don’t think research on the topic is done in the same way in the high-income countries, so I doubt it will be possible. Although health insurance (private or public) is supposed to protect against unexpected medical expenses, it can still be ineffective, and furthermore access to health insurance enables people to purchase healthcare they might otherwise have neglected, which could put them at risk of financial catastrophe where the insurance system fails to provide adequate coverage.

Obamacare is going to extend no-frills coverage to the currently uninsured, but this doesn’t mean they’ll get benefits sufficient to prevent financial catastrophe, so it will be interesting to see whether it meets both of the goals of a health-financing system (improving access and reducing financial risk), just one of them, or neither. And if it fails on either or both of those goals, does this mean that Bangladesh will achieve effective UHC before the USA?
That would be interesting … but first Bangladesh needs to start the move toward UHC, and hopefully this research will provide useful information and a little impetus in support of that process.

Next week sees the release of the Ken Loach movie The Spirit of ’45, which describes the UK’s attempts to implement socialism through the ballot box between 1945 and 1951. Ultimately a failed project, this revolution has left one enduring and much-loved symbol, the UK National Health Service (NHS). In the same month as the release of Loach’s movie, the NHS is going to undergo what are generally touted as the largest reforms in a generation, the Health and Social Care Bill, which sees significant market reforms introduced to the NHS and a major reorganization of its hierarchy. This is all happening against a backdrop of unprecedented government austerity, recent reports finding significant failings in the way the NHS cares for patients, and a Conservative-run government with a very strong ideological bent towards radical experimentation with the UK’s institutions. It is also happening against the backdrop of a worldwide movement towards universal health coverage (UHC), which is even taking hold in the USA. Given this global movement, and the UK’s central role as a model for it, it’s unlikely that even the most ideological of governments is going to attack the basic principle of the NHS: to provide healthcare on the basis of need, not ability to pay.

It’s easy to make predictions about how these reforms will turn out, without even analyzing the policy, because a cynical outside observer of the UK can always fall back on three simple principles: nothing in the UK works very well, the British government is terribly incompetent regardless of its ideological stripe, and you can’t improve healthcare by reducing the amount you spend on it[1].
However, I’ve written before about what I think will happen as a result of the new Bill, and the specific good and bad points I see in it; I won’t repeat these in detail here. For those interested in the Bill itself, the Guardian gives an outline of its main points, and since I wrote my post the Bill has been beefed up a little. The revised Bill includes a sneaky little clause that supposedly forces clinical commissioning groups (CCGs) to make all health services they purchase subject to tender, rather than allowing existing NHS services to be preferred providers. CCGs are organizations supposedly formed by doctors which are charged with disbursing government money to providers of healthcare – they are the main purchasers of health services after the reforms have been passed. By forcing them to open all non-emergency services to tender, the Bill will (it is claimed) force existing NHS hospitals and GPs to compete with private services for government money, ideally driving down costs. It’s not clear to me how the contracts between these services and the CCGs will be negotiated, and this aspect of the reforms gets my spider-senses tingling, because it just stinks of “a potentially good idea done badly.” Some of the background on the way the Health and Social Care Bill forces CCGs to use competitive tendering is presented in this opinion piece (but beware: huge amounts of that piece are factually wrong or very misleading, so take everything it says with a grain of salt).

Below are a few reasons why I think this particular competition reform is going to fail.

• It puts family doctors in charge of paying family doctors: a Clinical Commissioning Group is meant to be a group of general practitioners (GPs, or family physicians in the American parlance) who will be given money by the government with the task of purchasing all health services for patients in their area. This is supposed to put the health service back into the hands of GPs.
The problems here are twofold and, I should think, blindingly obvious. Much of the money that these CCGs need to distribute will be spent on purchasing GP services, so the plan puts GPs in charge of purchasing services from GPs. This is a notoriously tightly-knit community with strong common interests, and I find it hard to believe the money will be dispensed wisely. The second problem is that GPs may be good doctors, but that doesn’t make them good at resource-allocation decisions, and often doctors are the worst people to decide how to spend money wisely. A good health financing system should find ways to efficiently and equitably enable doctors to make good clinical decisions; it’s not obvious to me how putting doctors in charge of the financing decisions is congruent with this. A lot of commentators in the Guardian are decrying the role of major accounting services, which are being contracted by some of these CCGs to handle the decision-making process[2], but to me this seems like a good thing: the further you can move the pot of money from a GP the better, in my opinion.

• It seems to rely on bulk contracts: the NHS to date seems to have structured a lot of its purchasing decisions on contracts that offer bulk funding – that is, a hospital or GP contracts to provide a service for a specified fee, but the service is very generalized and not broken down into its particulars. In the case of GPs, this usually means they are paid a fixed amount per year to provide services to patients on their list, but no detail is specified as to how they should provide services or what they should provide – not even a minimum basket of services. This is why many GPs in the UK operate single-handed surgeries with inconvenient opening times, have very little time for patients, and don’t provide services (like chest x-rays or chlamydia testing) that are taken for granted as routine in other countries.
These contracts don’t encourage efficiency, and when these types of contracts are negotiated with large providers (like private healthcare organizations or big hospitals) it’s likely they will be highly beneficial to the provider unless the CCG negotiating the contract has a very adept team of lawyers. Unless the new Bill includes very strong support for this contract-writing framework (and see my further point below), I expect we will see profligate misuse of funds as the providers take these naive and poorly-supported CCGs to town.

• The financing system is not obvious: it’s not clear to me how the CCGs are expected to decide what is the most economically effective (or, for that matter, clinically effective) service, what benchmarks will be established for comparing the services they are accepting tenders from, and how they are expected to make contracting decisions. This isn’t all the fault of the people writing the Bill: within the field of health services management, there is still much dispute about how best to assess the quality of care provided by a large service. For example, in-hospital mortality might be considered an important measure of quality, but how does one account for the mix of patients and the severity of their illness in comparing two hospitals? Should one trust the figures the hospital presents, and if not, who is the central provider of assessment services to which a CCG should turn when attempting to compare hospitals on this measure? And if two hospitals have slightly differing mortality rates, how much extra should a purchaser be expected to value the difference at? Can a purchaser make a judgment to buy services from a cheaper hospital with higher age/sex-adjusted mortality rates, or is that decision unethical on its face? Have CCGs got any expertise on these issues, or received any guidance?
It could be that the Health and Social Care Bill provides extensive information on this, and supposedly the reforms will include the establishment of a new organization to help CCGs with this task. But the reality is that no one really knows the answers to many of these questions, and it’s not clear that the structure for health financing proposed by the Health and Social Care Bill is going to be invulnerable to problems arising from them.

• It lacks centralized guidance and pricing structures: at the moment there is a single contract that all GPs sign with the NHS when they provide services. Will this contract be used by the CCGs? This contract basically gives a fixed pricing system for obtaining GP services – if it is used, how can the CCGs claim to be operating a competitive tendering system? If it is not used, and GPs are to negotiate their services through price negotiations, how are the CCGs going to decide the correct price?

We’re not talking about a fully free market here, since most CCGs operate to purchase services within a given area and, in general, patients won’t be going outside that area for services, so there won’t be multiple CCGs competing in the area, and patients won’t be voting with their feet if prices are too high (in fact the patients won’t even see the prices). If the system is going to operate without market signals, then it’s going to need some very carefully arranged pricing mechanisms to ensure it doesn’t waste money. These are not likely to be optimally set at the level of individual CCGs, but would be better set by the government. It’s not clear to me that this is going to happen, or that the CCGs are going to get much guidance at all on how to fix prices. By way of comparison, Japan operates a system in which private hospitals and clinics charge patients for services on a fee-for-service basis, then charge the government’s insurance system for 70% of the total cost.
However, the government maintains a strict schedule of service fees, so it’s extremely difficult for doctors to over- or under-charge. Essentially, in this market the government provides a very strong centralized pricing guideline to keep the market at a stable price. The Australian government uses a similar mechanism, setting a minimum fee for GP services and allowing patients to vote with their hip pocket when choosing GPs. It’s not clear that the NHS will be using any such system; but in the absence of a strictly market-based mechanism for setting prices[4], how are the CCGs going to be able to choose what to pay, or even to know that they are getting value for money?

Given these concerns, I can see all sorts of disasters befalling the revised network of CCGs, including (for example) the possibility that they set up contracts that take up all their budget, only to find their area massively underserved with health services but no funds left to purchase more. What are they going to do then? How is the government determining how much money each CCG needs, and how can it be sure it has gotten it right? Does the government have any fallback position or plan B if, over the first few years of the system, the CCGs massively cock up their purchasing decisions?

On the face of it the reforms appear to consist of a poorly-structured semi-marketization, to be managed by inexperienced and new organizations that have been given arbitrary budgets in an environment with very limited centralized guidance, to purchase services in a marketplace where even the experts are uncertain about how to define value for money or quality of service provision, but with an extremely limited set of real market mechanisms as an alternative way of providing pricing signals. It’s like a healthcare Frankenmarket. How can this story possibly end well?
I predict we’ll know before the next election, and I suspect that the results of this grand experiment are going to form the obituary for this government.

—-

fn1: well, theoretically you can – lots of governments have built health policy on “efficiency savings.” Practically, however, health systems improve by spending more to gain more, rather than spending less to gain the same.

fn2: I provide no citation for this because I can’t be bothered looking, but really this shouldn’t require a citation; it’s the factual equivalent of saying “the sun rises in the morning.”[3]

fn3: I know, I know.

fn4: I don’t maintain here that these systems are necessarily the best, simply that they are a strict guideline and they do roughly seem to work.

### It’s all Greek to you, isn’t it?

I received a very interesting hospital dataset recently, in excel format and containing some basic variable names and values in Japanese. These included the sex of the patient, the specialty under which they were admitted to hospital, and all variable names. Ordinarily this would be reasonably easy to convert to English in excel before import, though it would require making a pivot table and fiddling a bit (my excel-fu is a bit rusty); but I also have address data which, though not important at this stage, may be in the future. So, at some point, I’m going to have to import this data in its Japanese form, and I figured I should work out how to do it.

The problem is that a straight import of the data leads to garbled characters, completely illegible, and very little information appears to be available online about how to import Japanese-labeled data into Stata. A 2010 entry on Statalist suggests it is impossible:

Unfortunately Stata does not support Unicode and does not support other multi-byte character sets, such as those necessary for Far Eastern languages.
If you are working with a data set in which all of the strings are in a language that can be represented by single byte characters (all European languages) just choose the appropriate output encoding. However, if your dataset contains strings in Far Eastern languages or multiple languages that use different character sets, you will simply not be able to properly represent all of the strings and will need to live with underscores in your data.

This is more than a little unfortunate, but it’s also not entirely correct: I know that my students with Japanese operating systems can import Stata data quite easily. So I figured there must be something basic going wrong with my computer that was stopping it from doing a simple import. In the spirit of sharing solutions to problems that I find with computers and stats software, here are some solutions to the problem of importing far Eastern languages for two different operating systems (Windows and Mac OS X), with a few warnings and potential bugs or problems I haven’t yet found a solution for.

Case 1: Japanese language, Windows OS

In this case there should be no challenge importing the data. I tried it on my student’s computer: you just import the data any old how, whether it’s in .csv or excel format. Then, in your preferences, set the font for the data viewer and the results window to any of the Japanese-language OS defaults: MS Mincho or Osaka, for example. This doesn’t work if you’re on an English-language Windows, as far as I know, and it definitely doesn’t work in Mac OS X. In the latter case you are simply not able to choose the Japanese native fonts – Stata doesn’t use them. No matter what font you choose, the data will show up as gobbledygook. There is a solution for Mac OS X, however (see below).

Case 2: English language, Windows OS

This case is fiddly, but it has been solved, and the solution can be found online through the helpful auspices of the igo, programming and economics blogger Shinobi.
His or her solution only popped up when I did a search in Japanese, so I’m guessing that it isn’t readily available to the English-language Stata community. I’m also guessing that Shinobi solved the problem on an English-language OS, since it’s not relevant on a Japanese-language OS. Shinobi’s blog post has an English translation at the bottom (very helpful) and extends the solution to Chinese characters. The details are on Shinobi’s blog, but basically what you do is check your .csv file to see how it is encoded, then use a very nifty piece of software called iconv to translate the .csv file from its current encoding to one that can be read by Stata: in the example Shinobi gives (for Chinese) it is GB18030 encoding, but I think for Japanese Stata can read Shift-JIS (I found this explained somewhere online a few days ago but have lost the link).

Encoding is one of those weird things that most people who use computers (me included!) have never had to pay attention to, but it’s important in this case. Basically, there are different ways to map far Eastern characters to underlying byte values (this is the encoding), and although excel and most text editors recognize many of them, Stata only recognizes one. So if you have a .csv file that is a basic export from, say, excel, it’s likely in an encoding that Stata doesn’t recognize on an English-language OS. So just change the encoding of the file, and then Stata should recognize it. Working out what encoding your .csv file is currently in can be fiddly, but basically if you open it in a text editor you should be able to access the preferences of the editor and find out what the encoding is; then you can use iconv to convert to a new one (see the commands for iconv in Shinobi’s blog).

Unfortunately this doesn’t work on Mac OS X: I know this, because I tried extensively. Mac OS X has iconv built in, so you can just open a terminal and run it. BUT, no matter how you change the encoding, Stata won’t read the resulting text file.
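If you don’t have the iconv command to hand, the same re-encoding step can be scripted in a few lines of Python. This is a minimal sketch, not Shinobi’s actual commands: the filenames and encodings are assumptions, and you should check what your own .csv is actually encoded in (a text editor will usually tell you) before converting.

```python
# Re-encode a text file: a minimal Python stand-in for the iconv command line.
# The encodings here (utf-8 in, shift_jis out) are assumptions -- check your
# own file's source encoding first, and change them to suit.

def convert_csv(src_path, dst_path, src_enc="utf-8", dst_enc="shift_jis"):
    """Read the file in its current encoding and write it back out in the
    encoding the target program expects."""
    with open(src_path, "r", encoding=src_enc) as f:
        text = f.read()
    with open(dst_path, "w", encoding=dst_enc) as f:
        f.write(text)

# e.g. convert_csv("hospital.csv", "hospital_sjis.csv")  # hypothetical filenames
```

That one read/write pair is all the iconv step amounts to; the converted file can then be read into Stata as usual (though, as noted above, not on Mac OS X).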
You can easily adapt Shinobi’s solution for use on Mac, but it won’t work. This may be because the native encoding of .csv files on Mac is unclear to the iconv software (there is a default “Mac” encoding that is hyper dodgy). However, given the simplicity of the solution I found for Mac (below), it seems more likely that the problem is something deep inside the way Stata and the OS interact.

Case 3: English language, Mac OS X

This is, of course, something of a false case: there is no such thing as a single-language Mac OS X. Realizing this, and seeing that the task was trivial on a Japanese-language Windows but really fiddly on an English-language Windows, it occurred to me to just change the language of my OS (one of the reasons I use Apple is that I can do this). So, I used the language preferences to change the OS language to Japanese, and then imported the .csv file. Result? Stata could instantly read the Japanese. Then I just switched my OS back to English when I was done with Stata. This is a tiny bit fiddly in the sense that whenever you want to work on this file you have to switch OS languages, but doing so on Apple is really trivial – maybe 3 or 4 clicks. When you do this, though, if you aren’t actually able to read Japanese, you’ll be stuffed trying to get back. So, before you do this, make sure you change your system settings so that the language options are visible on the task bar (you will see a little flag corresponding to your default locale appear next to the date and time). Then, make sure you know the sequence of clicks to get back to the regional language settings (it’s the bottom option of the language options menu in your taskbar, then the left-most tab inside that setting). That way you can change back easily. Note also that you don’t, strictly speaking, have to change the actual characters on the screen into Japanese!
This is because when you select to change your default OS language, a little window pops up saying that the change will apply to the OS next time you log in, but will apply to individual programs next time you open them. So you can probably change the OS language, open Stata, fiddle about, close Stata, then change the OS back to English, and so long as you don’t log out or restart, you will never see a single Japanese-language menu! A weird, and kind of trivial, solution!

A final weird excel problem

Having used this trick in Mac OS X, I thought to try importing the data from its original excel format, rather than from the intermediate .csv file. To my surprise, this didn’t work! In programming terms, running insheet to import .csv files translates the Japanese perfectly, but running import to import the excel file fails to translate properly. So, either there is something inaccessible about excel’s encoding, or the import program is broken in Stata. I don’t know which, but this does mean that if you receive a Japanese-language excel file and you’re using Mac OS X, you will need to export to .csv before you import to Stata. This is no big deal: before Stata 12, there was no direct excel import method for Stata anyway.

A few final gripes

As a final aside, I take this as a sign that Stata need to really improve their support for Asian languages, and that they also need to improve the way they handle excel. Given excel’s importance in the modern workplace, I think it would be a very good idea if Microsoft did more to make it fully open to other developers. It’s the default data transfer mechanism for people who are unfamiliar with databases and statistical software, and it is absolutely essential that statisticians be able to work with it, whatever their opinions of its particular foibles or of the ethics of Microsoft.
It also has better advanced programming and data manipulation properties than, say, OpenOffice, and this makes it all the more important that it match closely to standards that can be used across platforms. Excel has become a ubiquitous workplace tool, the numerical equivalent of a staple, and just as any company’s staplers can work with any other company’s staples if the standards match, so excel needs to be recognized as a public good, and made more open to developers at other companies. If that were the case, I don’t think Stata would be struggling with Asian-language excel files while dealing fine with Asian-language .csv files. And finally, I think this may also mean that both Apple and Microsoft need to drop their proprietary encoding systems and use an agreed, open standard. And also that Windows need to grow up and offer support for multiple languages on all their versions of Windows, not just the most expensive one. Lastly, I hope this post helps someone out there with a Japanese-language import (or offers a way to import any other language that has a more extensive encoding than English).

In October my master’s student had her work on modeling HIV interventions in China published in the journal AIDS, with me as second author. You can read the abstract at the journal website, but sadly the article is pay-walled, so its full joys are not available to the casual reader. This article is a sophisticated and complex mathematical model of HIV, which incorporates three disease stages and models testing and treatment separately. It is based on a model published by Long et al in the Annals of Internal Medicine in 2010, but builds on that model by including the effects of methadone maintenance treatment, and drops its injecting drug use quality-of-life weight.
It also adds new risk groups to the model: Long et al considered only men who have sex with men (MSM), injecting drug users (IDU) and the general population, but we added commercial sex workers (CSW) and their clients, whom we refer to as “high-risk men.” Thus our mathematical model can consider the role of both injecting drug users and sex workers as bridging populations between high-risk groups and the general population, an important consideration in China. The HIV epidemic in China is currently a concentrated epidemic, primarily among IDUs in five provinces and amongst MSM. The danger of concentrated epidemics is that they give the disease a foothold in a country, and a poor or delayed response may cause the epidemic to jump to the rest of the population – there is some suggestion this may have happened in Russia, for example. The Chinese authorities, recognizing this risk, began expanding methadone maintenance treatment (MMT) in the early 2000s, but it still only covers 5% of the estimated 2,500,000 IDUs in China. Our goal in this paper was to compare the effectiveness of three key interventions to prevent the spread of this disease – expanded voluntary counseling and testing (VCT); expanded antiretroviral treatment (ART); and expanded harm reduction (MMT and needle/syringe programs) – as well as combinations of these interventions. VCT was assumed to reduce risk behavior and to expand the pool of individuals who can enter treatment each year; ART was assumed to reduce infectiousness; and harm reduction to reduce risk behavior. Costs were assigned to all of the programs based on available Chinese data, and different scenarios were considered (such as testing everyone once a year, or high-risk groups more frequently than everyone else). The results showed that all the interventions considered are cost-effective relative to doing nothing; that some of the interventions save more money than they cost; and that the most cost-effective intervention was expanding access to ART.
Harm reduction was very close to ART in cost-effectiveness, and would probably be more cost-effective if we incorporated its non-HIV-related effects (reduced mortality and crime). The Chinese government stands to reap a long-term benefit from implementing some of these programs now, through the 3.4 million HIV cases averted if the interventions are successful (there are a lot of “ifs” in that sentence). This is the first paper I’m aware of that compares ART and harm reduction head-on for cost-effectiveness, though subsequently some Australians showed in the same journal that needle/syringe programs (NSP) in Australia are highly cost-effective as an anti-HIV intervention. This is also the most comprehensive model of HIV in China to date, and the first to conduct cost-effectiveness analysis in that setting. I think it might also be the first paper to consider the detailed structure of risk groups in a concentrated epidemic. There are obvious limitations to the conclusions that one can draw from a mathematical model, and some additional limitations on this model that are specific to China: the data on costs were a bit weak (especially for MMT), and of course there are questions about how feasible some of the interventions would be. We also didn’t consider restricting the interventions to the key affected provinces, which would have made them much cheaper, and we didn’t consider ART or VCT interventions targeted only at the high-risk groups, which would also have been cheaper.
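To give a flavour of how these compartmental models work, here is a toy two-group SI model with within- and between-group transmission – the bridging-population structure discussed above in miniature. Every number in it is an illustrative assumption, not a value from the published model (which has more groups, three disease stages, and intervention effects):

```python
# Toy two-group SI epidemic model: a high-risk group seeds infection into the
# general population through a (weak) between-group mixing term. All parameter
# values and the two-group structure are illustrative assumptions only.

def simulate(beta_within=0.3, beta_between=0.02, years=20, dt=0.1):
    # Group 0: a high-risk group (e.g. IDU); group 1: the general population.
    # S and I are proportions of each group, so S[i] + I[i] stays 1.
    S = [0.99, 0.999]
    I = [0.01, 0.001]
    for _ in range(int(years / dt)):
        # Force of infection on each group: within-group plus between-group mixing.
        force = [beta_within * I[i] + beta_between * I[1 - i] for i in range(2)]
        new_inf = [force[i] * S[i] * dt for i in range(2)]
        for i in range(2):
            S[i] -= new_inf[i]
            I[i] += new_inf[i]
    return I  # prevalence in each group after `years` years
```

Even this stripped-down version shows the key qualitative behaviour: prevalence in the general population is driven upward by the high-risk group, which is why interventions concentrated on the bridging groups can be so cost-effective.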
For example, legalizing sex work and setting strict licensing laws might enable universal, quarterly HIV testing and lead to the eradication of HIV from this group within 10 years, but we didn’t include this scenario in the model because a) legalization is not going to happen, b) enforcement of licensing laws is highly unlikely to be effective in the current context in China, and c) data on the size and behavior of the CSW population is probably the weakest part of our model, so the findings would be unreliable. Despite the general and specific limitations of this kind of modeling in this setting, I think the results are a strong starting point for informing China’s HIV policy. China seems to have a very practical approach to this kind of issue, so I expect that we’ll see these kinds of policies implemented in the near future. My next goal is to explore the mathematical dynamics of these kinds of models, with the aim of answering some of the controversial questions about whether behavioral change is a necessary or effective part of a modern HIV response, and the exact conditions under which we can hope to eliminate or eradicate HIV. Things are looking very hopeful for the future of HIV – it is likely to be eliminated or contained in most countries within our lifetime, even without the development of a vaccine – and that’s excellent, but there is still debate about how fast that will happen and about the most cost-effective ways of getting there: hopefully the dynamic properties of these models can give some insight into that debate.

This article is a big professional achievement for me in another way. It’s extremely rare for master’s students to publish in a journal as prestigious as AIDS (impact factor over 6!), and my student’s achievement is a reflection of her amazing talent at both mathematics and English, and of a year of intense work on her part, but I like to think it is also a reflection of my abilities as a supervisor.
There were lots of points where we could have let things slide on the assumption that master’s students don’t publish in AIDS; but we didn’t, and she did. I like to think the final product reflects well on both of us, so read it if you get the chance!

Today’s issue of PLOS Medicine contains an interesting debate between Australia’s own anti-smoking paladin, Simon Chapman, and Professor Jeff Collin from Scotland, over whether governments should introduce a license for smokers. Chapman puts the case for a license, while Collin opposes it, and the debate is refreshingly free of jargon and paywalls, so it is quite accessible to non-public health types. I think the license is an interesting idea: basically, anyone who wants to smoke would be required to pay a fee to obtain a license, and no one without a license could purchase cigarettes. Licenses would be available for various quantities of cigarettes, and by registering the licenses with a fixed central database it would be possible to ensure that people could only consume within the licensed amount. Those who want to give up smoking could turn in their license and get a refund of all the years’ fees they’ve paid, plus interest. Meanwhile, the government would be able to accurately track smoking statistics, which is very useful from a public health perspective. Chapman also suggests that, just like a driver’s license, one should be required to pass a test to get the license – thus, in his words, ensuring that new smokers are making an informed choice (something the tobacco industry has long declared it believes applies to smokers’ decisions) and guaranteeing that people who take up smoking have been required to inform themselves of its risks and of the difficulty of giving up.
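The refund-with-interest part of the scheme is easy to make concrete. This is just a sketch of the arithmetic – the annual fee and interest rate below are invented for illustration, since the proposal specifies neither:

```python
# Refund owed to a smoker who quits and hands in their license: every annual
# fee they ever paid, plus compound interest. The fee and rate used here are
# invented for illustration; the proposal itself doesn't fix them.

def quit_refund(annual_fee, years_held, rate=0.03):
    """Each year's fee earns compound interest from the year it was paid
    until the year the license is surrendered."""
    return sum(annual_fee * (1 + rate) ** (years_held - y)
               for y in range(1, years_held + 1))
```

At a hypothetical $200 a year for 20 years at 3%, the quitting smoker walks away with about $5,400 – a non-trivial incentive to give up, which is presumably the point.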
Chapman’s article also offers arguments to dismiss claims that a license would be intrusive, discriminate against the poor, or stigmatize smokers, and proposes a gradual lifting of the minimum age for acquiring the license, in order to make new smokers less and less common. He compares the license with a license to drive or own a gun and, quite interestingly, with a prescription for pharmaceuticals, which he represents as a kind of temporary license. By its own lights, it is quite a strong argument.

The opposing case by Collin takes a more structural, less drug-user-focused approach to the challenge of reducing smoking rates. He argues that we should continue to focus on regulating the tobacco companies to combat what he calls an “industrial epidemic,” and says we should strengthen measures which “should centre on changing a system of manufacture and promotion of such harmful products centred on the corporation, an institution that is staggeringly ill-suited to such roles when viewed from a public health perspective.” He suggests that further measures targeting users are both discriminatory and stigmatizing, and that increasing attempts to manipulate prices and cost barriers will punish existing poor smokers the most (and smoking, at least in developed nations, is a much bigger problem amongst the poor). This is a point that Chapman disputes, but Chapman’s argument against it is at least partly based on dismissing these complaints as crocodile tears from the tobacco industry and its front organizations – of which I sincerely doubt Collin is a member. Collin argues, furthermore, that the idea of a tobacco smoker’s license is fundamentally illiberal, and grounds most extant bans on tobacco users’ behavior in a liberal philosophical framework: Smoke-free policies have been recognised and understood as unambiguously liberal measures rather than authoritarian intrusions on personal freedom.
In advancing a case focused on the protection of non-smokers, workers, and children, such legislation conforms to JS Mill’s classic formulation of the harm principle in On Liberty: “(t)he only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.”

His argument, then, is that we should avoid anti-tobacco legislation that targets the users themselves, except to prevent harm to others, and focus instead on the source of the harm (the corporations). He even suggests that the imposition of licenses would represent a propaganda “gift” to the industry, and would further punish poor people who smoke relative to the wealthy. Overall I think Collin’s arguments are less coherent and consistent than Chapman’s, but I am inclined towards his position on the issue. I think the license would probably be a good idea from a public health perspective, but represents an unnecessary curtailment of individual liberty. It doesn’t actually have any serious civil liberties implications – registering smokers is not the beginning of the police state – but it does shift the focus of efforts away from the source of the harm to its most immediate victims, and it does play a stigmatizing role. Collin also observes that the major goals of the Framework Convention on Tobacco Control (FCTC) are institutional and in many countries have not been achieved, and argues that it is better to work on systems for improving countries’ ability to meet those goals than to divert our efforts towards restricting users’ behavior. I agree with him on this point: many countries are a long way from a proper implementation of the basic goals of the FCTC – higher tobacco taxes, curbs on illicit tobacco, and indoor smoking restrictions, for example – and strengthening those countries’ ability to resist tobacco company money and marketing is a much better goal for anti-smoking activists.
The reality is that smoking in the developed world is in decline and will continue to be, and as a result the tobacco companies are aggressively targeting developing nations. It is in those developing nations that activists should be fighting a battle for improved governance and institutional structures that will help those countries protect their health systems from this “industrial epidemic.”

The debate raises a related issue for me: have some countries gone far enough in their anti-tobacco measures? Australia, for example, having now passed plain packaging laws, has pretty much made smoking as unattractive and difficult as it can without actually banning it. Should we stop there? The reason this is an issue for me is that I play a violent sport, and I recognize that violent sports represent a deliberate choice by people to take risks with their health in pursuit of a certain pleasure. So does drinking to get drunk, and so does casual sex – both activities of which I approve. At some point we have to recognize that people have the right to trade health for fun, and although that doesn’t give people carte blanche to, for example, go surfing in a frankenstorm or dance naked in front of lions, it does mean that at some point we have to draw a line beyond which public health measures must stop. From a public health perspective, so long as anyone is smoking, “more needs to be done.” But from a civil liberties perspective, at some point the barriers to smoking, and anti-smoking education, are such that we can safely say that people who take up the habit know the risks and are suitably reminded of them, and that there is no reason to further intrude on their personal decisions. Have some developed nations reached that point? For Australia, at least, I’m not sure there is much more that can be done except to introduce a license, or to introduce the rolling bans mentioned in Chapman’s article. Do we need to go that far, or is the current status quo sufficient?
Should the anti-tobacco lobby in Australia be scaling back its national efforts to simple vigilance against new tobacco industry tricks, and instead begin focusing more of its energy on the other countries in the West Pacific where smoking remains a serious and growing problem? There comes a point where you have to accept that the activity harms no one else, that the person engaged in it is willing and aware of the risks, and that the activity is sufficiently challenged in everyday life that those who pursue it must be committed and really want to do it. At that point, perhaps public health organizations need to step back and, instead of further restricting the behavior, defend the right of those engaged in it to do so, and to get healthcare for the problems it causes. This is what we do now for mountain-climbing and rugby, two very dangerous but well-respected activities. I think it is possible that in some developed nations smoking has reached that point, and maybe in those countries enough has been done.

Congratulations America! With the American electorate[1] having given a resounding endorsement[2] of the policies of the Revolutionary Islamic Socialist Party of Kenya, America will finally see a form of healthcare financing reform. Depending on who you read, this reform seems to be either an insane policy that will bankrupt America, or not much change. I think I speak in concert with 314,731,000 Americans when I declare that I’m no expert on American healthcare – let’s face it, a system that complex is hardly going to be comprehensible to mere mortals – but from my position of limited knowledge I’m inclined towards the latter view. But in health financing, not much change can mean a lot to the minority of the population who are most vulnerable to healthcare-related financial catastrophe, and so not much change is probably, in this case, a Very Good Thing.
Just how good will become more apparent over the next few years, and I’m guessing that for health system researchers around the world Obama’s election victory is a huge boon, because it means they can watch what is pretty much the only largely private health financing system in the developed world being reformed from a radically different starting point than the standard vision of universal health coverage. Although reading conservative commentators one gets the impression that Obamacare is a massive socialist-fascist system of monolithic oppression, in reality it appears to be an attempt to impose careful, minimalist regulation on the system: to preserve its character of essentially private health insurance, while regulating it to improve efficiency and reduce inequality. The efficiency improvements are intended to reduce long-term growth in costs, and the inequality improvements to ensure that everyone gets coverage of some kind, regardless of ability to pay or pre-existing conditions. These latter improvements are intended to eliminate the problem of the uninsured without disrupting the essentially private nature of the marketplace for health insurance. Whether this will work or not is a big gamble, but in the long term it could have huge economic and social benefits for ordinary Americans. I’m struck by the extent to which the problem of healthcare-related financial catastrophe is researched in developing countries but left largely undescribed in the USA. I’m also struck by the ease with which developing nations like Indonesia, the Philippines, Thailand and others have been able to introduce innovative financing schemes, while the USA has languished. So I thought, while I’m taking a break from a busy work schedule, that I would consider an alternative to Obamacare based on a careful restructuring of the entire US insurance market, using the existing Medicare system as a base.
I lack any in-depth knowledge of the American system, so this post is entirely speculative, but it gives an opportunity to think about ways of gradually moving from a private to a public system, using primarily market means, and allowing the users of the system to determine the final mix of private and public insurers through their consumption decisions. Once again, it’s entirely and completely speculative, being done purely for fun, and comments demolishing it on all its particulars are welcomed, nay, encouraged. First, though, a word about the flaws in the current Medicare system.

Does Medicare work?

The New England Journal of Medicine (NEJM) has been running a series of opinion pieces (and some research) on health policy reform for a while now, and in the week of Obama’s reelection it published a fascinating article describing the failings of Medicare. The key message of this article is that Medicare fails both as an insurance package and as a cost containment mechanism. I was shocked to discover that Medicare does not include a cap on costs, so although it is an insurance package it doesn’t stop beneficiaries’ out-of-pocket expenses from destroying their budget. Compare this with, for example, Japan’s universal insurance scheme, implemented in 1961, which has a cap on personal expenses and has been responsible for restraining costs to below the OECD average of 9.6% of GDP (according to wikipedia[3]). Granted, other universal health coverage schemes are universal, so they have better risk sharing (Medicare is for the elderly), but still… the USA is the richest country in the world; you’d think sorting this out wouldn’t be soooo hard. According to the NEJM article, in 2009 15% of Medicare recipients faced payments of US$5,000 or more, when the maximum(?) income for pensioners in the USA is something like $15,000.
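As a back-of-envelope check on those numbers: the financial-catastrophe literature commonly counts health spending as catastrophic when out-of-pocket payments exceed roughly 40% of a household’s capacity to pay (income net of subsistence needs). The subsistence figure in the sketch below is an illustrative assumption, not a measured US value:

```python
# Is a given out-of-pocket health payment "catastrophic"? This follows the
# common definition in the financial-catastrophe literature: catastrophic when
# out-of-pocket spending exceeds ~40% of capacity to pay (income minus
# subsistence needs). The subsistence figure is an illustrative assumption.

def is_catastrophic(oop, income, subsistence=7000, threshold=0.40):
    capacity = max(income - subsistence, 1)
    return oop / capacity >= threshold

# A pensioner on $15,000 facing $5,000 of out-of-pocket costs is spending over
# 60% of capacity to pay -- well past the threshold.
```

So by the yardstick routinely applied to developing countries, that 15% of Medicare recipients are well inside catastrophic territory.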
In studies of financial catastrophe in developing nations this sort of statistic is considered disastrous, though it should be noted that the stats in the article aren’t sufficient to identify rates of financial catastrophe[4]. The article then notes that, because of the lack of a cap, Medicare recipients often pay for secondary insurance to cover the out-of-pocket expenses. This has the dual effect of increasing their insurance costs and, if they choose a good insurance package, encouraging unnecessary use of medical care, since a good secondary insurance package enables free healthcare usage and thus increases costs. The article also references a paper suggesting that half of America’s increase in healthcare costs in the last 40 years can be sheeted home to the growth of private health insurance (I haven’t read this reference and have no idea how good it is). The article’s recommendation is that the government should put a cap on Medicare costs while simultaneously restricting the ability of insurance companies to cover Medicare’s out-of-pocket costs, and it references many other reports that have suggested the same thing. On the basis of that report, Medicare hardly seems to be a good starting point for health insurance reform, does it?

An alternative vision for Obamacare: extending Medicare

Given Obama’s approach to healthcare reform, it seems that a fundamental assumption of any alternative vision is that it should not radically alter current market structures. Obamacare appears to be, fundamentally, a suite of regulatory changes to the current marketplace. He hasn’t suggested, for example, nationalizing all existing insurers to form a single-payer, government-run monolith. So, any alternative vision for Obamacare that is going to be consistent with Obama’s obvious preference for creeping incrementalism is going to need to use existing systems to achieve its goals. How can we do this? Let’s try building on Medicare.
The first step of the Faustian plan would be to put a cap on expenses under Medicare – looking at the tables in the NEJM, about $1500 seems like a good limit. Then, to achieve a gradualist change in the American healthcare system, Faustuscare would consist of a simple decision to allow anyone to enrol in Medicare. In Japan the cost of the single-payer insurance system varies by region, so Obama could implement a similar system: anyone can join Medicare, paying a rate that varies according to the population and its distribution in their state. This would make Faustuscare cheap in the most populous and youngest states (just as it is in Japan). The one condition on Medicare would be that it can’t ban people from joining on the basis of pre-existing conditions, and has no age-dependent pricing structure… or, if you want to be really brutal, the price a member pays is fixed by the age at which they join, not their current age.

The idea, of course, is to use the power of the government to tax rich idlers like Mitt Romney. Obama fixes the cost of joining Medicare at less than that of the popular big medical plans, and makes up the shortfall from general taxation. It’s almost certain that making Medicare available to people under 65 – even those with pre-existing conditions – is going to reduce overall risk, so he can afford to lower prices. Then, he offers companies a further concession – they can move employees to the new system at some reduced rate, provided that they pass half of the savings on to their employees. With such a condition he is going to recruit lots of new members quickly, and everyone who gets recruited is essentially going to get a pay rise.

The plan here is obvious – use the power of general taxation to supplement a reasonably priced health insurance plan, with no health-related joining conditions, to undercut existing insurance companies. The new entrant to the insurance market already has everyone over 65 as a customer, and by introducing the (equality-improving) cap on payments, has caused a lot of those seniors to ditch their existing supplemental insurance. In order to compete with this new market entrant, the existing companies are going to have to find a way to drop prices and do away with pre-existing-illness conditions. The result will be a massive, across-the-board efficiency gain. The likely survivors of the government’s entry to the market will be the HMOs, which are already ruthlessly efficient, comparatively cheap, and offer reasonably good health outcomes. Obama can choose to restrain Medicare’s power to ensure that some insurers survive in a mixed market, or he can use the power of general taxation to force them all out of business, nationalizing them one by one as they fold. I would recommend the former, since the American health market is obviously built on competition between both providers and commissioners. Keeping Medicare in the market as the insurer of last resort will ensure that the other insurers lower their prices and/or offer a basic package that is competitive with Medicare, but they will still offer “bonus” packages that appeal to the rich or the health-obsessed.

I have a suspicion that much of this plan could be achieved through administrative rather than legislative changes. It can be sold as a partially free market solution to the health insurance problem, and I suspect a lot of big companies would jump on the chance to shift their insurance payments to such a system. I think the American system needs two things: competition at the bottom of the market, and plans that don’t discriminate on pre-existing conditions. Any such plan needs to be able to recruit low-risk people to balance its risk profile, and will (probably) also need some form of subsidy. Medicare is the obvious vehicle, since it already exists, and offering it at reasonable cost to young people could potentially rapidly expand its coverage. Since it is already huge, further expansion of coverage would give it additional power to negotiate cost-cutting with providers – which would force other insurers to do the same.

America’s problem in reforming its health system gradually (rather than the crash-through or crash approach of the original NHS) is to find a way to manipulate free markets to be equitable. Obama appears to be taking the road of regulation, but the alternative is nationalisation by stealth, and Medicare offers the vehicle by which to do this. What do you think?

fn1: Well, six swing states anyway

fn2: When results are measured to at least two decimal places

fn3: I really should be able to do better than this

fn4: I’ve not done a literature search but I have a strong suspicion that healthcare-related financial catastrophe – a very real phenomenon in the modern USA – is better-understood in developing nations than it is in the USA. What does this have to say about health services researchers’ attitudes towards the world?

Christianity’s fundamental promise is of eternal life, and the risk of refusing to accept God’s grace is generally accepted to be eternal damnation. While the truth of these statements is still subject to debate, there is little empirical evidence of the benefit of eternal life, and little research exploring the possible drawbacks of a decision to forego evil in exchange for the promise of eternal salvation. In a world of finite resources, decisions about how best to dispose of available resources while alive need to take into account the long-term and (if certain cosmological properties are shown to hold) potentially eternal consequences of the choice between good and evil. In this blog post, we will examine the costs and benefits of baptism and rejection of sin from an econometric standpoint. Of specific interest in this blog post is the relationship between the benefits of accepting God’s grace and the discount rate society applies to years of life not yet lived.

The immediate use of an analysis of the costs and benefits of accepting God’s grace is obvious, but from a wider perspective a clear understanding of the economic aspects of this theological decision may help us to understand the persistence of evil in a world where humans have free will, and to answer the eternal question: why does evil exist in a world shaped according to God’s will?

Methods

Standard cost-effectiveness analysis methods were applied to two simple decision problems. The first decision problem is the question of whether or not to baptize a child, on the assumption that baptism grants the child God’s grace, causing them to live a holy life but to lose the benefits that might accrue to an evil-doer. The analysis was then extended to consider a problem implicit in a great deal of modern rhetoric about the soul and sexuality, viz: if homosexuality is a choice, and that choice leads only to hell, is it cost-effective to choose to be homosexual? This question was answered in terms of numbers of partners foregone, and quality-adjusted life years gained from the sacrifice.

The basic decision problem: whether to baptize

The basic decision problem was addressed using standard measures of effectiveness. It was assumed that were a child to be baptized they would be eligible to enter heaven upon their death, and would thus be able to live forever. Were they not to be baptized, they are assumed to enter hell at death. Each year of life lived was assumed to grant the individual a full quality adjusted life year (QALY); each year in heaven (from now until the rapture, i.e. infinite years from now) was also assumed to grant 1 QALY; while entry into hell was considered to grant 0 QALYs. All QALYs were discounted using the standard formula, and the effect of the discounting rate on the benefits of each decision was calculated over three different life expectancies: 45 years (Enlightenment-era), 70 years (biblical lifespan) and 80 years (the life expectancy granted by modern materialist living). Effectiveness was then assessed for a wide range of discount rates, varying from 0.5% to 5%. The difference in QALYs gained (the incremental effect) was then calculated for all these scenarios.
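Concretely, the “standard formula” here can be taken to be continuous exponential discounting (an assumption on my part; annual compounding gives almost identical numbers at these rates). A constant stream of one QALY per year, discounted at rate $r$, is worth

$\int_0^{l}\exp(-rt)dt=\frac{1-\exp(-rl)}{r}$

over an earthly life of length $l$, while the eternity in heaven that begins at death is worth

$\int_l^{\infty}\exp(-rt)dt=\frac{\exp(-rl)}{r}$

discounted QALYs.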

Cost-effectiveness calculation for the baptism problem

Having calculated the incremental effect of baptism, the cost was then calculated under the assumption that evil people make more money. This assumption is implicit in, for example, Mark 8:36, when Jesus asks

What good is it for a man to gain the whole world, yet forfeit his soul?

which suggests that doing good requires some form of material sacrifice. This is, of course, also obvious in the early doctrine of the Dominican and Franciscan orders, and much of pre-enlightenment religious debate was focused around this struggle between material goods and goodness.

This contrast was modeled by a variable $\alpha$, which represents the proportion of additional annual income an unbaptized sinner earns relative to a person living in grace. For example, if a sinner earns 10% more than a convert, then $\alpha=0.1$. Then, assuming a fixed average income for God-fearing individuals, we can calculate the lost income due to being good. This is the incremental cost of salvation. From this calculated incremental cost and the incremental benefit, we can estimate an incremental cost effectiveness ratio (ICER), and estimate whether the decision to baptize is cost-effective.

In keeping with standard practice as used by, for example, the National Institute for Health and Clinical Excellence, we set the basic income of one of the saved to be the mean income of the UK, and define baptism as “cost-effective” if its ICER falls below a threshold of three times the annual mean income of the UK. We also establish a formula for the cost-effectiveness of salvation, based on the relative difference in income between the good and the evil, the discount rate, and the human lifespan.

All income in future years was discounted in the same way as future QALYs.

The costs and benefits of voluntary homosexuality

Finally, we address a problem implicit in some forms of modern Christian rhetoric, that of the wilful homosexual. Many religious theorists seem to think (either implicitly or openly) that homosexuality is a choice. If so, then the choice can be modeled in terms of an exchange of sexual partners for eternal damnation. In this analysis, we calculated the number of sexual partners a potentially homosexual male will forego over a 20 year sexual career commencing at age 15. We assumed that all life years before age 15 are irrelevant to the calculation (that is, we assumed that all individuals make a choice at age 15 as to whether to be good or evil), and that a person foregoing homosexuality will have 0 partners. Other assumptions are the same as those made above. The ICER for being good was then calculated as the cost in foregone sexual partners (discounted over a wide range of rates) divided by the QALYs gained through foregoing this lifestyle and gaining access to heaven.

Faustian discount rates and the problem of heavenly utilities

Commonly used discount rates range from 3 to 5%, but these are potentially inconsistent with the discount rates preferred by evil-doers. In this study we did not model differential discount rates between evil-doers and the elect, but we did consider one special case: that in which everyone observes a discount rate equal to that observed by Dr. Faust. As is well known, Dr. Faust sold his soul to Mephistopheles in exchange for earthly power, and after 24 years his soul was taken into hell. Since he knew the time frame at the beginning of the deal, this implies that he was following a discount rate sufficient to rate all time more than 24 years in the future at 0 value. Under standard discounting practice such a rate does not exist, but we can approximate it by the rate necessary to value all time more than 24 years in the future at no more than 5% of current value. This discount rate, which we refer to as the Faustian Discount Rate, is approximately 12.5%. All scenarios were also tested under this discount rate.
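The Faustian Discount Rate can be computed directly from the 5% residual-value condition just described (a quick sketch, not part of the original analysis; continuous discounting assumed):

```python
import math

# Faust values all time more than 24 years ahead at <= 5% of present value:
# exp(-r * 24) = 0.05  =>  r = ln(1 / 0.05) / 24 = ln(20) / 24
faustian_rate = math.log(1 / 0.05) / 24

print(f"Faustian Discount Rate: {faustian_rate:.1%}")  # about 12.5%
```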

A further problem is calculating utility weights for a year spent in heaven or hell. Given the lack of empirical data on the utility of a year in heaven, and the paucity of first-hand accounts, we assumed that a year in heaven was equivalent to a year without pain or suffering of any kind, i.e. one full QALY. According to the site What Christians Want to Know, Revelation 4:8 describes heaven as

a constant chant of holy angels that are continually proclaiming Holy, Holy, Holy over the throne of God.  The Mercy Seat in heaven where God sits is surrounded by magnificent angels full of glory and power that proclaim and bless the holy name of God without ceasing.  Some of these are described as beasts, full of eyes, with six wings and neither rest day or night in their proclaiming the holiness of God.

For those of us who don’t enjoy doom metal, this would probably have a utility value of less than one. In the interests of a conservative analysis, we assign heaven a utility of 1.

A similar problem applies to assigning utilities for hell. Many people claim to have been to hell and back, but their accounts of their time at a Celine Dion concert are not convincing and it is unlikely that accurate data on the state of hell exists. Popular conception of hell suggests a realm of eternal torture, but it is worth noting that in standard burden of disease studies even the most unpleasant and torturous diseases – such as end states of cancer, AIDS, and severe disability – are assigned positive utility weights, often quite a lot higher than 0. It is therefore reasonable to suppose that hell should be assigned a positive but small utility. However, again in the interests of conservative analysis, we assign a utility weight of 0 to a year spent in hell – that is, it is equivalent to death.

Results

Incremental benefit of salvation

The formula for the incremental benefit of salvation can be derived as

$LY_{g}=\frac{\exp(-rl)}{r}$

where

• $LY_{g}$ is the incremental benefit of being good, in QALYs
• r is the discount rate
• l is the human life expectancy

Figure 1 charts this incremental benefit over a wide range of discount rates for three different life expectancies.

Figure 1: Incremental benefit of salvation for three different life expectancies

It is clear that as the discount rate increases the incremental benefit of salvation decreases rapidly. At the Faustian Discount Rate, the incremental benefit of salvation is a mere 0.03 QALYs for a 45-year life expectancy, or 0.0004 for a human with an 80-year life expectancy. That is, even if Faustus had been offered and then rejected his bargain at birth, and expected to live to 45 years only, he would have seen the benefit to himself as being only about 0.03 years of life, due to his tendency to discount the value of years far in the future.
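These values can be checked against the formula above with a few lines (a sketch, using the rounded 12.5% Faustian rate):

```python
import math

def incremental_benefit(r, l):
    """QALYs gained by salvation: exp(-r*l)/r, an eternity in heaven discounted
    from the moment of death at age l."""
    return math.exp(-r * l) / r

faust = 0.125  # the Faustian Discount Rate
print(round(incremental_benefit(faust, 45), 2))  # ~0.03 QALYs
print(round(incremental_benefit(faust, 80), 4))  # ~0.0004 QALYs
```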

The cost-effectiveness of baptism

We now consider the cost-effectiveness of baptism. Let the income of one of the saved be given by $c_{g}$, and that of an evil-doer be $c_{e}=(1+\alpha)c_{g}$. Then the income foregone in order to enter heaven is given by the formula

$C=\alpha c_{g}(\frac{1-\exp(-rl)}{r})$

where all parameters are defined as before. Then the incremental cost effectiveness ratio (incremental cost divided by incremental benefit) is

$ICER=\alpha c_{g}(\exp(rl)-1)$

The ICER is plotted in figure 2 for two common life expectancies across a range of values of the discount rate, assuming a mean annual income of 26,000 pounds and that evil-doers earn 10% more income than the saved.

Figure 2: Incremental cost-effectiveness of salvation for two different life expectancies

At a Faustian Discount Rate, life expectancy of 70 years, and 26,000 pound mean income, the ICER for baptism is 16,202,218 pounds per QALY gained.
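As a quick check on that figure (a sketch in Python, not from the original post; it uses the exact Faustian rate $\ln(20)/24 \approx 12.48\%$ implied by the 5% cutoff, rather than the rounded 12.5%, which is what reproduces the quoted number):

```python
import math

def icer_baptism(alpha, c_g, r, l):
    """ICER of baptism: alpha * c_g * (exp(r*l) - 1), income foregone per QALY gained."""
    return alpha * c_g * math.expm1(r * l)  # expm1(x) = exp(x) - 1

faust = math.log(20) / 24  # exact Faustian Discount Rate, ~12.48%
print(round(icer_baptism(0.10, 26_000, faust, 70)))  # ~16.2 million pounds/QALY
```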

We can estimate a general condition on society’s discount rate for baptism to be cost-effective, in terms of the additional income gained by being evil and the life expectancy. This formula is given by:

$r \le \frac{1}{l}\ln\Bigl(\frac{3+\alpha}{\alpha}\Bigr)$

For a life expectancy of 80 years, assuming that the damned earn 10% more than the saved, the required discount rate for baptism to be cost-effective is 4.3% or less; if the damned earn 20% more this threshold drops to 3.5%. It is clear that damnation doesn’t have to be much more materially rewarding before it becomes attractive even under quite reasonable discount rates.
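Plugging numbers into this condition is straightforward (a sketch; the quoted 4.3% and 3.5% thresholds correspond to the 80-year modern lifespan):

```python
import math

def ce_threshold_rate(alpha, l):
    """Maximum discount rate at which baptism stays cost-effective,
    i.e. ICER <= 3 * c_g  =>  r <= ln((3 + alpha) / alpha) / l."""
    return math.log((3 + alpha) / alpha) / l

for alpha in (0.1, 0.2):
    print(f"alpha = {alpha:.0%}: r <= {ce_threshold_rate(alpha, 80):.1%}")
```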

The costs and benefits of voluntary homosexuality

We now consider the situation of a callow 15 year old youth, considering embarking on a life of sodomite sin. What should he choose? Obviously, from the perspective of a simple youth, the costs need to be weighed up in terms of foregone lovers. Assuming an average of five sexual partners a year, a sexual career beginning at age 15 (which is set to time 0 in this analysis) and lasting 20 years, and the same conditions on discount rates, eternal damnation, etc. as described above, a simple formula for the number of partners this man would be foregoing by refusing to choose the love that dare not speak its name can be derived as

$p=\frac{5}{r}(1-\exp(-20r))$

and from this the incremental cost effectiveness ratio (measured in partners foregone per QALY gained) as

$ICER=5\Bigl(\frac{1-\exp(-20r)}{1-\exp((15-l)r)}\Bigr)$

Note that this ICER is almost independent of the human lifespan, since the denominator is very close to 1 for realistic lifespans. It is in fact almost linear in the discount rate (Figure 3). At the Faustian Discount Rate, the potential gay man is looking at a cost of 4.6 lovers foregone for every QALY gained. Note these values change for different annual average numbers of lovers.

Figure 3: Incremental cost-effectiveness of foregoing a life of sodomy
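As a check on these figures (a sketch only, evaluating the ICER formula above at the rounded 12.5% Faustian rate for two lifespans):

```python
import math

def icer_partners(r, l):
    """Partners foregone per QALY gained: 5*(1 - exp(-20r)) / (1 - exp((15-l)*r))."""
    return 5 * (1 - math.exp(-20 * r)) / (1 - math.exp((15 - l) * r))

faust = 0.125
# nearly identical across lifespans, since the denominator is ~1 for realistic l
print(round(icer_partners(faust, 70), 1))  # ~4.6 lovers per QALY
print(round(icer_partners(faust, 80), 1))  # ~4.6 lovers per QALY
```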

It might be possible to construct an experiment that assessed individuals’ discount rates using this formula: their answers to the question “how many years of life would you give up to win an additional 5 lovers” could be used to identify their value of r.

Conclusion

In Mark 8:36, Jesus asks the rhetorical question

What good is it for a man to gain the whole world, yet forfeit his soul?

Although usually presented as a question with no clear answer, it is actually quite easy to investigate this question empirically, and to draw conclusions about its implied cost-effectiveness analysis. The results presented here show that, in general, the good gained by forfeiting one’s soul is quite great, and the decision to forego baptism and live a life of evil (including wilful homosexuality) is generally the best decision one would expect a rational actor to make. At very low life expectancies and unrealistically low discount rates it is more beneficial to forego evil and embrace salvation, but at the discount rates usually used by economists, and assumed to reflect rational decisions made by ordinary individuals, salvation is not a profitable course of action.

These findings have interesting theological implications. First, we note that the Church is most likely to gain converts in a society which has a very low discount rate – but in general, the societies where the Church first took hold were societies with high rates of infant mortality and all-cause mortality, which were likely to put a low value on the later years of life – that is, to have high discount rates. But such societies are not naturally sympathetic to the message of eternal damnation, unless they can be convinced to forego rationality in moral decision making. This might explain the Church’s historical resistance to scientific endeavour, and willingness to foment superstitious practices.

These findings also explain Christianity’s historical opposition to usury. It is naturally the case that buying something today and paying for it later – i.e. borrowing – is inconsistent with a very low discount rate, which values future years of income almost as highly as present income. Furthermore, usurers operating in the open market will set interest rates well above 0.5%, and it is likely that the practice of usury plus the publishing of interest rates will encourage a society with higher discount rates (in fact, it is likely that this would be encouraged by the lending class). This directly undermines the Church’s lesson of salvation, which depends on very low discount rates to work.

Finally, low discount rates are often associated with environmentalism – care for future generations, priority setting that considers costs in the distant future, etc. – but on the central issue of our time (global warming) many of the born again religious organizations that most fervently preach the message of salvation also vehemently oppose any message of custodianship and environmental care. These organizations would probably make better progress in convincing people to give up the joys of the here-and-now for an indeterminate heaven (that seems to involve a lot of noise pollution) if they could find a theoretically consistent approach to discount rates.

This post has shown a simple explanation for the problem of evil: most people operate with discount rates closer to Dr. Faust than to St. Christopher, and as a result they are unlikely to accept the distant benefits of heaven over the joys of the material world. Until the church can find a way to convince us that all our tomorrows are as important as today, the problem of evil will never be solved.

One possible consequence of the collapse of the summer arctic ice cover is that storms like Sandy will become the new normal. There are reasons to think that the freak conditions that caused Sandy to become so destructive are related to the loss of arctic ice, and although the scientific understanding of the relationship between the arctic and northern hemisphere weather in general is not robust, there seems to be at least some confidence that the ice and weather around the Atlantic are related.

It’s worth noting that what is happening in the arctic this year is well in advance of scientific expectations. The 2007 Intergovernmental Panel on Climate Change (IPCC) report, for example, predicted an ice free arctic in about the year 2100. The cryosphere blogs, however, are running bets on about 2015 for “essentially ice free,” and no ice in 2020, as shown, for example, in this excellent post on ice cover prediction by Neven. Results presented by the IPCC are one of the main mechanisms by which governments make plans to manage climate change – in fact this was their intention – and one would think that events happening 80 years sooner than the IPCC predicts would make a big difference to the plans that governments need to consider.

One of the biggest efforts to make policy judgments based on current predictions of future effects of climate change was the Stern Review, published in 2006 and based on the best available scientific predictions in the previous couple of years. The key goal of the Stern Review was to assess the costs and benefits of different strategies for dealing with climate change, to answer the question of whether and when it was best to begin a response to climate change, and what that response should be.

The Stern Review received a lot of criticism from the anti-AGW crowd, and also from a certain brand of economists, partly because of the huge uncertainties involved in predicting such a wide range of events and outcomes so far in the future, and partly because of its particular assumptions. Of course, some people rejected it for being based on “alarmist” predictions from organizations like the IPCC, or rejected its fundamental assumption that climate change was happening. But one of the most persistent and effective criticisms of the Review was that it used the wrong discount rate, and thus it overemphasized the cost of rare events in the future compared to the cost of mitigation today.

I think Superstorm Sandy and the arctic ice renders that criticism invalid, and instead a better criticism of the Stern Review should now be that it significantly underestimates the cost of climate change, regardless of its choice of discount rate. Here I will attempt to explain why.

According to its critics, the Stern Review used a very low discount rate when it considered future costs. A discount rate is essentially a small percentage by which future costs are discounted relative to current costs, in order to reflect the preference humans have for getting stuff now. The classic, simplest discount rate simply applies an exponential reduction in costs over time with a very small rate (typically 2-5%), so that costs incurred 10 years from now are multiplied by a factor exp(-10*rate). I use this kind of discounting in cost-effectiveness analysis, and a good rough approximation to its effects is that, if costs are incurred constantly over a human’s lifetime, only about 40% of the total costs a person might be expected to incur will actually be counted now.
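That rule of thumb is easy to check with a couple of lines (a sketch, assuming continuous discounting at 3% over an 80-year lifetime):

```python
import math

r, lifespan = 0.03, 80

# Present value of a constant stream of 1 unit per year over the whole lifetime
pv_stream = (1 - math.exp(-r * lifespan)) / r
print(round(pv_stream, 1))             # ~30.3 "present" years out of 80

# Fraction of the undiscounted total that survives discounting
print(round(pv_stream / lifespan, 2))  # ~0.38, i.e. about 40%
```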

For example, if I am considering an intervention today that will save a life, and I assume that life will last 80 years, then from my perspective today that life is actually only worth about 30 years. This reflects the fact that the community prefers to save years of life now, rather than in 70 years’ time, and also the fact that a year of life saved in 20 years’ time from an intervention enacted today is only a virtual year of life – the person I save tomorrow could be hit by a bus next week, and all those saved life years will be splattered over the pavement. The same kinds of assumptions can be applied to hurricane damage – if I want to invest $16 billion now on a storm surge barrier for New York, I can’t offset the cost by savings from a $50 billion storm in 50 years’ time, because $16 billion is worth more to people now than in 50 years’ time, even if we don’t consider inflation. I would love to have $16 billion now, but I probably wouldn’t put much stock on a promise of $16 billion in 50 years’ time, and wouldn’t change my behavior much in order to receive it[1].

Stern is accused of rejecting this form of discounting, and essentially using a discount rate of 0%, so that future events have the same value as current events. There are arguments for using this type of discounting when discussing climate change, because climate change is an intergenerational issue and high discount rates (of e.g. 3%) fundamentally devalue future generations relative to our own. Standard discounting is probably a logic that should only be applied when considering decisions made by people about issues in their own lifetimes.
This defense has been made (the wikipedia link lists some people who made it), and it’s worth noting that many of the conservative economists who criticized the Stern Review for its discounting choice implicitly use Stern’s type of discounting when they talk about government debt – they complain extensively about “saddling future generations” with “our” debt, when their preferred discounting method would basically render the cost to those generations of our debt at zero. This debate is perhaps another example of how economists are really just rhetoricists rather than philosophers. But for now, let’s assume that the Stern Review got its discounting wrong, and should have used a standard discounting process as described above.

The Stern Review also made judgments about the effects of climate change, largely along the lines of the published literature and especially on the material made available to the world through previous rounds of IPCC reports. For example, if you actually access the Stern Review, you will note that a lot of the assumptions it makes about the effects of climate change are essentially related to the temperature trend. That is, it lists the effects of a 2C increase in temperature, and then applies them in its model at the point that the temperature crosses 2C. For example, from page 15 of Part II, chapter 5 (the figure), we have this statement:

If storm intensity increases by 6%, as predicted by several climate models for a doubling of carbon dioxide or a 3°C rise in temperature, this could increase insurers’ capital requirements by over 90% for US hurricanes and 80% for Japanese typhoons – an additional $76 billion in today’s prices.

The methods in the Stern Review are unclear, but this seems to be suggesting that the damage due to climate change is delayed in the analysis until temperature rises by 3C[2] – which will happen many years from now, in most climate models.

The assumptions in the Stern Review seem to be that the worst effects of climate change will begin many years from now, perhaps after 2020, and many (such as increased storm damage) will have to wait until the temperature passes 2C. There seems to be an assumption of a linear increase in storm damage, for example, which loads most storm damage into the far future.

This loading of storm and drought damage into the far future is the reason the discount issue became so important. If the storm damage is in the far future, then it needs to be heavily discounted, and the argument becomes that we should wait until much closer to the time to begin mitigating climate change. This argument is flawed for other reasons (you can’t stop climate change overnight, you have to act now because it’s the carbon budget, not the rate of emissions, that is most important to future damage), but it is valid as it applies to the debate about whether we should be acting to prevent climate change or prepare for climate change.

However, recent events have shown that this is irrelevant. Severe storm damage and droughts are happening now, and at least in the Atlantic rim these events are probably related to the collapse of the arctic ice load, and reductions in snow albedo across the far north. Stern’s analysis was based on most of these events happening in the far future, not now, and as a result his analysis has two huge flaws:

1. It underestimates the total damage due to climate change. Most economic analyses of this kind are conducted over a fixed time frame (e.g. 100 years), but for any fixed time frame, a model that assumes a gradual increase in damage over time is going to underestimate the total amount of damage that occurs over the period relative to a model that assumes that the damage begins now. Stern couldn’t assume the damage begins now, because those kinds of things weren’t known in 2006. But it has begun now – we need to accept that the IPCC was wrong in its core predictions. That means that the total damage occurring in the next 100 years is not going to be $X per year between 2050 and 2100, but $X per year between 2010 and 2100 – nearly twice as much damage.
2. The discount rate becomes irrelevant. Discount rates affect events far in the future, and have minimal effect now. If Stern had used a standard discount rate of 3%, then from his perspective in 2006 the current estimates of storm damage in the USA due to Sandy ($50 billion) would be about $42 billion. Also, all the damage in the USA due to Sandy is excess damage, because without the collapse of the arctic ice fields, Sandy would probably have headed out to sea, and done 0 damage. The estimated cost of the storm surge barrier mentioned above was $16 billion, so assuming that this cost is correct (unlikely) and it could have been built by now (impossible), that investment alone would have been worthwhile. Whereas if we assume a storm like Sandy won’t happen until 2050, the cost of the storm from Stern’s perspective is $14 billion, and we shouldn’t bother building the barrier now.
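The discounting arithmetic in point 2 can be verified in a couple of lines (a sketch, assuming annual compound discounting at 3% from the Review’s 2006 vantage point; the exact figures depend on which discounting convention Stern used):

```python
damage = 50e9  # estimated Sandy damage in dollars
r = 0.03       # standard discount rate

pv_2012 = damage / (1 + r) ** 6    # Sandy (2012) as seen from 2006: ~$42 billion
pv_2050 = damage / (1 + r) ** 44   # the same storm deferred to 2050: ~$14 billion

print(round(pv_2012 / 1e9), round(pv_2050 / 1e9))
```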

This means that the main conservative criticism of the Stern Review is now irrelevant – all that arcane debate about whether it’s more moral to value our future generations equally with now (Amartya Sen[3]) or whether we should focus on building wealth now and let our kids deal with the fallout (National Review Online) becomes irrelevant, because the damage has started now, and is very real to us, not to our potential grandchildren.

The bigger criticism that needs to be put is that Stern and the IPCC got climate change wrong. The world is looking at potentially serious food shortages next year, and in the last two years New York has experienced two major storm events (remember Irene’s storm surge was only 30cm below the level required to achieve the flooding we saw this week). Sandy occurred because of a freak coincidence of three events that are all connected in some way to global warming. We need to stop saying “it’s just weather” and start recognizing that we have entered the era of extreme events. Instead of writing reviews about what this generation needs to do to protect the environment for its children, we need to be writing reviews about what this generation can do to protect itself. Or better still, stop writing reviews and start acting.

fn1: This is a problem that has beset the organized religions for millennia. An eternity in heaven is actually not equivalent to many years on earth, if you discount it at 3% a year.

fn2: Incidentally, I’m pretty sure I was taught in physics that the use of the degree symbol in representing temperatures is incorrect. Stern uses the degree symbol. Economists!!! Sheesh!

fn3: Incidentally, I think in his published work, Sen uses the standard discounting method.
