Uhtred son of Uhtred, regular ale drinker, who I predict will die of injury (but will go to Valhalla, unlike you, you ale-sodden wretch)

There has been some fuss in the media recently about a new study showing that no level of alcohol use is safe. It received a lot of media attention (for example here), reversed a generally held belief that moderate consumption of alcohol improves health (a belief even enshrined in the Greek food pyramid, which has a separate category for wine and olive oil[1]), and led to angsty editorials about “what is to be done” about alcohol. Although there are definitely things that need to be done about alcohol, prohibition is an incredibly stupid and dangerous policy, and so are some of its less odious cousins, so before we go full Leeroy Jenkins on alcohol policy it might be a good idea to ask whether this study is really the bee’s knees, and whether it really shows what it says it does.

This study is a product of the Global Burden of Disease (GBD) project at the Institute for Health Metrics and Evaluation (IHME). I’m intimately acquainted with this group because I made the mistake of getting involved with them a few years ago (I’m not involved now), so I saw how their sausage is made, and I learnt about a few of their key techniques. In fact I supervised a student who, to the best of my knowledge, remains the only person on earth outside of two people at IHME who was able to install a fundamental software package they use. So I think I know something about how this institution does its analyses. I think it’s safe to say that they aren’t all they’re cracked up to be, and I want to explain in this post how their paper is a disaster for public health.

The way that the IHME works in these papers is always pretty similar, and this paper is no exception. First they identify a set of diseases and health conditions related to their chosen risk (in this case the chosen risk is alcohol). Then they run through a bunch of previously published studies to identify the numerical magnitude of increased risk of these diseases associated with exposure to the risk. Then they estimate the level of exposure in every country on earth (this is a very difficult task which they use dodgy methods to complete). Then they calculate the number of deaths due to the conditions associated with this risk (this is also an incredibly difficult task, to which they apply a set of poorly validated methods). Finally they use a method called comparative risk assessment (CRA) to calculate the proportion of deaths due to the exposure. CRA is in principle an excellent technique, but certain aspects of their application of it are particularly shonky; we probably don’t need to touch on those here.
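To make the CRA step concrete, here is a back-of-envelope sketch of the attributable fraction arithmetic in Python. The drinking categories, prevalences and relative risks are numbers I have invented for illustration, not anything from the paper:

```python
# Sketch of the core comparative risk assessment (CRA) arithmetic.
# All exposure categories, prevalences and relative risks below are
# invented for illustration; they are not IHME's numbers.

def attributable_fraction(prevalence, relative_risk):
    """Population attributable fraction for a categorical exposure.

    prevalence: share of the population in each exposure category
    relative_risk: risk in each category relative to the unexposed
    """
    excess = sum(p * (rr - 1) for p, rr in zip(prevalence, relative_risk))
    return excess / (1 + excess)

# Hypothetical drinking categories: none, moderate, heavy
prevalence = [0.5, 0.4, 0.1]
relative_risk = [1.0, 1.1, 2.5]

paf = attributable_fraction(prevalence, relative_risk)

# Attributable deaths = PAF x total deaths from the condition
deaths_from_condition = 10_000
attributable_deaths = paf * deaths_from_condition
```

The point of the sketch is that the final attribution is only as good as the relative risks and the exposure prevalences fed into it, which is why the rest of this post matters.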

So in assessing this paper we need to consider three main issues: how they assess risk, how they assess exposure, and how they assess deaths. We will look at these three parts of their method and see that they are fundamentally flawed.

Problems with risk assessment

To assess the risk associated with alcohol consumption the IHME used a standard technique called meta-analysis. In essence a meta-analysis collects all the studies that relate an exposure (such as alcohol consumption) to an outcome (any health condition, but death is common), and then combines them to obtain a single final estimate of what the numerical risk is. Typically a meta-analysis will weight all the risks from all the studies according to the sample size of the study, so that for example a small study that finds banging your head on a wall reduces your risk of brain damage is given less weight in the meta-analysis than a very large study of banging your head on a wall. Meta-analysis isn’t easy, for a lot of reasons to do with the practical details of studies (for example if two groups study banging your head on a wall, do they use the same definition of brain damage and the same definition of banging?), but once you iron out all the issues it’s the only method we have for coming to comprehensive decisions about all the studies available. It’s important because the research literature on any issue typically includes a bunch of small shitty studies and a few high quality studies, and we need to balance them all out when we assess the outcome.

As an example, consider football and concussion. A good study would follow NFL players for several seasons, taking into account their position, the number of games they played, and the team they were in, and compare them against a concussion-free sport like tennis, matching them to players of similar age, race, socioeconomic background etc. Many studies might not do this – for example a study might take 20 NFL players who died of brain injuries and compare them with 40 non-NFL players who died of a heart attack. A good meta-analysis handles these issues of quality and combines multiple studies together to calculate a final estimate of risk.
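For the curious, here is roughly what the pooling step of a fixed-effect meta-analysis looks like, sketched in Python. Standard practice weights each study by the inverse of its variance, which is essentially what weighting by sample size amounts to, since bigger studies have smaller variance. The three studies below are invented:

```python
import math

# Fixed-effect meta-analysis sketch, pooling on the log relative risk
# scale. Each study is (relative_risk, standard_error_of_log_rr);
# the numbers are invented for illustration.
studies = [
    (1.50, 0.40),  # small study: big effect, wide error
    (1.02, 0.05),  # large study: null effect, narrow error
    (0.95, 0.10),  # medium study: null effect
]

weights = [1 / se**2 for _, se in studies]   # inverse-variance weights
log_rrs = [math.log(rr) for rr, _ in studies]

pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_rr = math.exp(pooled_log)
pooled_se = math.sqrt(1 / sum(weights))
# The big null study dominates, so the pooled RR sits close to 1
# despite the small study's dramatic effect.
```

This is the behaviour you want: a small shitty study with a wild estimate gets swamped by a large careful one, which is exactly why the choice of which studies to let in matters so much.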

The IHME study provides a meta-analysis of all the relationships between alcohol consumption and disease outcomes, described as follows[2]:

we performed a systematic review of literature published between January 1st, 1950 and Dec 31st 2016 using Pubmed and the GHDx. Studies were included if the following conditions were met. Studies were excluded if any of the following conditions were met:

1. The study did not report on the association between alcohol use and one of the included outcomes.

2. The study design was not either a cohort, case-control, or case-crossover.

3. The study did not report a relative measure of risk (either relative risk, risk ratio, odds-ratio, or hazard ratio) and did not report cases and non-cases among those exposed and un-exposed.

4. The study did not report dose-response amounts on alcohol use.

5. The study endpoint did not meet the case definition used in GBD 2016.

There are many, many problems with this description of the meta-analysis. First of all they seem not to have described the inclusion criteria (they say “Studies were included if the following conditions were met” but don’t say what those conditions were). But more importantly, their conditions for exclusion are very weak. We do not usually include case-control and case-crossover studies in a meta-analysis, because these studies are, frankly, terrible. The standard approach to including a study in a meta-analysis is to assess it with a tool such as the Cochrane Risk of Bias Tool and dump it if it is highly biased. For example, should we include a study that is not a randomized controlled trial? Should we include studies where subjects know their assignment? The meta-analysis community have developed a set of tools for deciding which studies to include, and the IHME crew haven’t used them.

This got me thinking that perhaps the IHME crew have been, shall we say, a little sloppy in how they include studies, so I had a bit of a look. On pages 53–55 of the appendix they report the results of their meta-analysis of the relationship between atrial fibrillation and alcohol consumption, and the results are telling. They found 9 studies to include in their meta-analysis, but there are many problems with these studies. One (Cohen 1988) is a cross-sectional study and should not be included, according to the IHME’s own exclusion criteria. Six of the remaining studies assess fibrillation only, while two assess fibrillation and atrial flutter, a precursor of fibrillation. Most tellingly, though, all of these studies find no relationship between alcohol consumption and fibrillation at almost all levels of consumption, yet their chart on page 54 shows that their meta-analysis found an almost exponential relationship between alcohol consumption and fibrillation. This finding is simply impossible given the observed studies. All 9 studies found no relationship between moderate alcohol consumption and fibrillation, and several found no relationship even for extreme levels of consumption, but somehow the IHME found a clear relationship. How is this possible?

Problems with exposure assessment

This problem happened because they applied a tool called DISMOD to the data to estimate the relationship between alcohol exposure and fibrillation. DISMOD is an interesting tool but it has many flaws. Its main benefit is that it enables the user to incorporate studies that use many different, non-matching categories of exposure definition, and turn them into a single risk curve. So for example if one study group has recorded the relative risk of death for 2–5 drinks, and another group has recorded the risk for 1–12 drinks, DISMOD offers a method to turn this into a single curve that represents the risk relationship per additional drink. This is nice, and it produces the curve on page 54 (and all the subsequent curves). It’s also bullshit. I have worked with DISMOD and it has many, many problems. It is incomprehensible to everyone except the two guys who programmed it, who are nice guys but can’t give decent support or explanations of what it does. It has a very strange response distribution and doesn’t appear to apply other distributions well, and it has some really kooky Bayesian applications built in. It is also completely inscrutable to 99.99% of the people who use it, including the people at IHME. It should not be used until it is peer reviewed and exposed to a proper independent assessment. It is this application of DISMOD to data that obviously shows no relationship between alcohol consumption and fibrillation that produced the bullshit curve on page 54 of the appendix – a curve that bears no relationship to the observed data in the collected studies.

This also applies to the assessment of exposure to alcohol. The study used DISMOD to calculate each country’s level of individual alcohol consumption, which means that the same dodgy technique was applied to national alcohol consumption data. But let’s not get hung up on DISMOD. What data were they using? The maps in the Lancet paper show estimates of risk for every African and south east Asian country, which suggests that they have data on these countries, but do you think they do? Do you think Niger has accurate estimates of alcohol consumption within its borders? No, it doesn’t. A few countries in Africa do, and the IHME crew used some spatial smoothing techniques (never clearly explained) to estimate the consumption rates in the others. This is a massive dodge that the IHME apply, which they call “borrowing strength.” At its most egregious this is close to simply inventing data – in an earlier paper (perhaps in 2012) they were able to estimate rates of depression and depression-related conditions for 183 (I think) countries using data from 97 countries. No prizes to you, my astute reader, if you guess that all the missing data was in Africa. The same applies to the risk exposure estimates in this paper – they’re a complete fiction. Sure, for the UK and Australia, where alcohol is basically a controlled drug, they are super accurate. But in the rest of the world, not so much.

Problems with mortality assessment

The IHME has a particularly nasty and tricky method for calculating the burden of disease, based around a thing called the year of life lost (YLL). Instead of measuring deaths, they measure the years of your life that you lost when you died, compared to an objective global standard of the life you could have achieved. Basically they take the age at which you died, subtract it from the life expectancy of an Icelandic or Japanese woman, and that’s the number of YLLs you suffered. Add that up for every death and you have your burden of disease. It’s a nice idea, except that there are two huge problems:

  • It massively overweights deaths at young ages
  • They never incorporate uncertainty in the ideal life expectancy of an Icelandic or Japanese woman
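To see the first problem concretely, here is the YLL arithmetic as described above, sketched in Python. The standard life expectancy of 86 years is an illustrative round number (the real GBD standard life table is more elaborate), and the three ages at death are invented:

```python
# Years of life lost (YLL): each death is scored by the gap between
# the age at death and an aspirational standard life expectancy.
# The 86-year standard and the ages below are illustrative only.
standard_life_expectancy = 86   # roughly an Icelandic/Japanese woman

ages_at_death = [1, 45, 85]
ylls = [max(standard_life_expectancy - age, 0) for age in ages_at_death]
total_yll = sum(ylls)
# The infant death contributes 85 YLLs, the 85-year-old just 1:
# deaths at young ages dominate the total.
```

One infant death here carries as much weight as 85 deaths at age 85, which is the overweighting problem in a nutshell.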

There is an additional problem in the assessment of mortality, which the IHME crew always gloss over, called “garbage code redistribution.” Basically, about 30% of every country’s death records are bullshit, and don’t correspond to any meaningful cause of death. The IHME has a complicated, proprietary system – one they cannot and will not explain – that redistributes these garbage codes into other, meaningful categories. What they should do is treat these redistributed deaths as a source of error (e.g. we have 100,000 deaths due to cancer and 5,000 redistributed deaths, so we actually have 102,500 ± 2,500 deaths), but they don’t; they just add them on. So when they calculate burden of disease they use the following four steps:

  • Calculate the raw number of deaths, with an estimate of error
  • Reassign dodgy deaths in an arbitrary way, without counting these deaths as any form of uncertainty
  • Estimate an ideal life expectancy without applying any measure of error or uncertainty to it
  • Calculate the years of life lost relative to this ideal life expectancy and add them up

So here there are three sources of uncertainty (deaths, redistribution, ideal life expectancy) and only one is counted; and then all these uncertain deaths are multiplied by the number of years lost relative to the ideal life expectancy.
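Using the cancer example above, here is a back-of-envelope sketch of what honest error propagation would look like. The sampling error is a number I have made up, and this is my simplification of how the redistribution uncertainty should be handled, not IHME’s actual method:

```python
import math

# Illustrative error propagation for the cancer example above.
# Counted deaths carry sampling error; 5,000 garbage-coded deaths may
# or may not belong to this cause. The 1,000 sampling SE is assumed.
counted, counted_se = 100_000, 1_000
redistributed = 5_000          # garbage codes assigned to cancer

# IHME-style: add redistributed deaths with no extra uncertainty.
naive_estimate = counted + redistributed
naive_se = counted_se

# Treating redistribution as uncertain: anywhere from 0 to all 5,000
# of those deaths might belong here, so centre on half and treat
# +/- 2,500 as an extra error term, combined in quadrature.
redis_mid, redis_halfwidth = redistributed / 2, redistributed / 2
honest_estimate = counted + redis_mid
honest_se = math.sqrt(counted_se**2 + redis_halfwidth**2)
# honest_se comes out ~2.7x the naive SE, before YLL multiplication
# inflates the error further.
```

Even in this toy version the uncertainty interval nearly triples once redistribution is counted honestly, and every YLL calculated downstream inherits that wider interval.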

The result is a dog’s breakfast of mortality estimates that don’t come even close to representing the truth about the burden of disease in any country due to any condition.

Also, the IHME apply the same dodgy modeling methods to deaths (using a method that they (used to?) call CoDMoD) before they calculate YLLs, so there’s another form of arbitrary model decisions and error in their assessments.

Putting all these errors together

This means that the IHME process works like this:

  • An incredibly dodgy form of meta-analysis that includes dodgy studies and miscalculates levels of risk
  • Applied to a really shonky estimate of the level of exposure to alcohol, that uses a computer program no one understands applied to a substandard data set
  • Applied to a dodgy death model that doesn’t include a lot of measures of uncertainty, and is thus spuriously accurate

The result is that at every stage of the process the IHME is unreasonably confident about the quality of their estimates, produces excessive estimates of risk and inaccurate measures of exposure, and is too precise in its calculations of how many people died. This means that all their conclusions about the actual risk of alcohol, the level of exposure, and the magnitude of disease burden due to the conditions they describe cannot be trusted. As a result, neither can their estimates of the proportion of mortality due to alcohol.

Conclusion

There is still no evidence that moderate alcohol consumption is bad for you, and solid meta-analyses of available studies support the conclusion that moderate alcohol consumption is not harmful. This study should not be believed, and although the IHME has good press contacts, you should ignore all the media on this.

As a former insider in the GBD process I can also suggest that in future you ignore all work from the Global Burden of Disease project. They have a preferential publishing deal with the Lancet, which means they aren’t properly peer reviewed, and their work is so massive that it’s hard for most academics to provide adequate peer review. Their methods haven’t been subjected to proper external assessment, and my judgement, based on having visited them and worked with their statisticians and their software, is that their methods are not assessable. Their data is certainly dubious at times, but most importantly their analysis approach is not correct and the Lancet doesn’t subject it to proper peer review. This is going to have long term consequences for global health, and at some point the people who continue to associate with the IHME’s papers (they have hundreds or even thousands of co-authors) will regret that association. I stopped collaborating with this project, and so should you. If you aren’t sure why, this paper on alcohol is a good example.

So chill, have another drink, and worry about whether it’s making you fat.


fn1: There are no reasons not to love Greek food – no wonder these people conquered the Mediterranean and developed philosophy and democracy!

fn2: This is in the appendix to their study


No, this really is not “the healthy one”

Today’s Guardian has a column by George Monbiot discussing the issue of obesity in modern England, which I think fundamentally misunderstands the causes of obesity and paints a dangerously rosy picture of Britain’s dietary situation. The column was spurred by a picture of a beach in Brighton in 1976, in which everyone was thin, and a subsequent debate on social media about the causes of the changes in British rates of overweight and obesity in the succeeding four decades. Monbiot’s column dismisses the possibility that the growth in obesity could be caused by an increase in the amount we eat, by a reduction in the amount of physical activity, or by a change in rates of manual labour. He seems to finish the column by suggesting it is all the food industry’s fault, but having dismissed the idea that the food industry has convinced us to eat more, he is left with the idea that the real cause of obesity is changes in the patterns of what we eat – from complex carbohydrates and proteins to sugar. This is a bugbear of certain anti-obesity campaigners, and it’s wrong, as is the idea that obesity is all about willpower, which Monbiot also attacks.

The problem here, though, is that Monbiot misunderstands the statistics badly, and as a result dismisses the obvious possibility that British people eat too much. He commits two mistakes in his article: first he misunderstands the statistics on British food consumption, and second he misunderstands the difference between a rate and a budget, which is ironic given he understands these things perfectly well when he comments on global warming. Let’s consider each of these issues in turn.

Misreading the statistics

Admirably, Monbiot digs up some stats from 1976 and compares them with statistics from 2018, and comments:

So here’s the first big surprise: we ate more in 1976. According to government figures, we currently consume an average of 2,130 kilocalories a day, a figure that appears to include sweets and alcohol. But in 1976, we consumed 2,280 kcal excluding alcohol and sweets, or 2,590 kcal when they’re included. I have found no reason to disbelieve the figures.

This is wrong. Using the 1976 data, Monbiot appears to be referring to Table 20 on page 77, which indicates an average of 2,280 kCal. But this is the average per household member, and does not account for whether or not a household member is a child. If we refer to Table 24 on page 87, we find that a single adult in 1976 ate an average of 2,670 kCal; similar figures apply for two-adult households with no children (2,610 kCal). Using the more recent data Monbiot links to, we can see that he got his 2,130 kCal from the file of “Household and Eating Out Nutrient Intakes”. But if we use the file “HC – Household nutrient intakes” and look at 2016/17 for households with one adult and no children, we find 2,291 kCal, and about 2,400 as recently as 10 years ago. These are large differences when they accrue over years.

This is further compounded by the age issue. When we look at individual intake we need to consider how old the family members are. If the average individual intake in 1976 was 2,590 kCal including alcohol and sweets, as Monbiot suggests, we need to rebalance it for adults and children. In a household with three people we have 7,770 kCal, which if the child is eating 1,500 kCal means that the adults are eating over 3,100 kCal each. That’s too much food for everyone in the house, even using the ridiculously excessive nutrient standards provided by the ONS. It’s also worth remembering that adults in 1976 were on average much younger than adults now, and an intake of 2,590 might be okay for a young adult but it’s not okay for a 40-plus adult, of which there are many more now than there were then. This affects obesity statistics.
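For anyone who wants to check the rebalancing arithmetic, here it is sketched in Python. The household composition (two adults, one child) and the child’s 1,500 kCal intake are assumptions:

```python
# Rebalancing a per-head household average between adults and children.
# The per-head figure comes from the 1976 data discussed in the text;
# the household composition and child's intake are assumed.
per_head_average = 2590      # kCal/day, incl. alcohol and sweets
household_size = 3           # two adults, one child (assumed)
child_intake = 1500          # kCal/day (assumed)

household_total = per_head_average * household_size   # 2,590 x 3 = 7,770
adult_intake = (household_total - child_intake) / 2   # ~3,135 kCal each
```

The per-head average hides the fact that the adults in such a household are each eating well over 3,000 kCal a day.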

Finally it’s also worth remembering that obesity is not evenly distributed, and an average intake of 2,100 kCal could correspond to an average of 2,500 in the poorest 20% of the population (where obesity is common) and 1,700 kCal in the richest, which is older and thinner. An evenly distributed 2,100 kCal will lead to zero obesity over the whole population, but an unevenly distributed 2,100 kCal will not. It’s important to look carefully at the variation in the datasets before deciding the average is okay.
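Here is a toy illustration in Python of how two populations with identical average intake can contain completely different numbers of overeaters. All the group sizes and intakes are invented:

```python
# Same average intake, different obesity risk: the mean hides the tail.
# The requirement, group sizes and intakes are invented for illustration.
requirement = 2100   # assumed average daily requirement, kCal

even_population = [2100] * 100                  # everyone exactly at the mean
uneven_population = [2500] * 20 + [2000] * 80   # mean is still 2,100

def overeaters(population, requirement):
    """Count people eating above their requirement."""
    return sum(1 for kcal in population if kcal > requirement)

even_mean = sum(even_population) / len(even_population)        # 2100.0
uneven_mean = sum(uneven_population) / len(uneven_population)  # 2100.0
# Identical means, but only the uneven population has a tail of
# people in chronic calorie surplus.
```

Both populations average 2,100 kCal, but one has nobody in surplus and the other has a fifth of its members overeating by 400 kCal a day.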

Misunderstanding budgets and rates

Let’s consider the 2,590 kCal that Monbiot finds as the average intake of adults in 1976, including alcohol and sweets. This is likely an underestimate – the average is probably more like 3,000 kCal including alcohol and sweets – but let’s go with it for now. Monbiot is looking to see what has changed in our diet over the past 40 years to lead to current rates of obesity, because he is looking for a change in the rate of consumption. But he doesn’t consider that all humans have a budget, and that a small excess over that budget sustained for a long period is what drives obesity. The reality is that today’s obesity rates do not reflect today’s consumption rates, but the steady pattern of consumption over the past 40 years. What made a 55 year old obese today is what they ate in 1976 – when they were 15 – not what the average person eats today. So rather than saying “we eat less today than we did 40 years ago so that can’t be the cause of obesity”, what really matters is what people have been eating for the past 40 years. And the stats Monbiot uses suggest that women, at least, have been eating too much – a healthy adult woman should eat about 2,100 kCal, and if the average was 2,590 then a woman who was an adult in 1976 has been at or above her energy requirement every year since. It doesn’t matter that her intake declined to 2,100 kCal by 2016, because she had been eating too much for the preceding 35 years anyway. It’s this budget, not changes over time, that determines the obesity rate now, and Monbiot is wrong to argue that it’s not overeating that has caused the obesity epidemic. Unless he accepts that a woman can eat 2,590 kCal a day for 40 years and stay thin, he needs to accept that the problem of obesity is one of British food culture over half a century.
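To see how the budget compounds, here is the arithmetic sketched in Python. The 7,700 kCal-per-kilogram figure is a crude rule of thumb that ignores metabolic adaptation as weight rises, so treat the final number as a theoretical upper bound, not a prediction:

```python
# The 'budget' view: a modest daily surplus compounds over decades.
# Requirement and intake come from the figures in the text; the
# 7,700 kCal-per-kg conversion is a rough rule of thumb that ignores
# metabolic adaptation, so the result is an upper bound only.
requirement = 2100   # healthy adult woman, kCal/day
intake = 2590        # 1976 average incl. alcohol and sweets

daily_surplus = intake - requirement          # 490 kCal/day
yearly_surplus = daily_surplus * 365          # ~179,000 kCal/year

kcal_per_kg_fat = 7700
naive_kg_per_year = yearly_surplus / kcal_per_kg_fat
# ~23 kg/year if nothing adapted; real bodies adapt, but the point is
# that even a fraction of this surplus, sustained for decades, is
# more than enough to produce obesity.
```

Even if metabolic adaptation absorbs 90% of that surplus, the remainder, compounded over 40 years, is the obesity epidemic.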

What this means for obesity policy

Somewhat disappointingly and unusually for a Monbiot article, there are no sensible policy prescriptions at the end except “stop shaming fat people.” This isn’t very helpful, and neither is it helpful to dismiss overeating as a cause, since everyone in public health knows that overeating is the cause of obesity. For example, Public Health England wants to reduce British calorie intake, and the figures on why make disturbing reading. Reducing calorie intake doesn’t require shaming fat people, but it does require acknowledgement that British people eat too much. This comes down not to individual willpower but to the food environment in which we all make choices about what to eat. The simplest way, for example, to reduce the amount that people eat is not to give them too much food. But there is simply no way in Britain that you can eat out or buy packaged food products without buying too much food. It is patently obvious that British restaurants serve too much food, that British supermarkets sell food in packages that are too large, and that as a result the only way for British people not to eat too much is through constant acts of will – leaving half the food you paid for, buying only fresh food in small amounts every day (which is only possible in certain wealthy inner city suburbs), and carefully controlling where, when and how you eat. This is possible, but it requires either that you move in a very wealthy cultural circle where the environment supports this kind of thing, or that you personally exert constant control over your life. And that latter choice will inevitably end in failure, because constantly controlling every aspect of your food intake in opposition to the environment where you purchase, prepare and consume food is very, very difficult.

When you live in Japan you live in a different food environment, which encourages small serving sizes, fresh and raw foods, and low fat and low sugar foods. In Japan you live in a food environment where you are always close to a small local supermarket with convenient opening hours and fresh foods, and where convenience stores sell healthy food in small serving sizes. This means that you can choose to buy small amounts of fresh food as and when you need them, and avoid buying in bulk in a pattern that encourages over-consumption. When your food choices fail (for example you have to eat out, or buy junk food) you will have access to a small, healthy serving. If you are a woman you will likely have access to a “woman’s size” or “princess size” that means you can eat the smaller serving that your smaller caloric requirements suggest is wisest. It is easy to be thin in Japan, and so most people are thin. Overeating in Japan really genuinely is a choice that you have to choose to make, rather than the default setting. This difference in food environment is simple, obvious and especially noticeable when (as I just did) you hop on a plane to the UK and suddenly find yourself confronted with double helpings of everything, and supermarkets where everything is “family sized”. The change of food environment forces you to eat more. It’s as simple as that.

What Britain needs is a change in the food environment. And achieving a change in food environment requires first of all recognizing that British people eat too much, and have been eating too much for way too long. Monbiot’s article is an exercise in denialism of that simple fact, and he should change it or retract it.

The journal Molecular Autism this week published an article about the links between Hans Asperger and the Nazis in Vienna during the second world war. Hans Asperger is the paediatric psychiatrist on whose work Asperger’s syndrome is based, and after whom the syndrome is named. Until recently Asperger was believed to have been an anti-Nazi, someone who resisted the Nazis and risked his own career to protect some of his developmentally delayed patients from the Nazi “euthanasia” program, which killed or sterilized people with certain developmental disabilities for eugenic reasons.

The article, entitled Hans Asperger, National Socialism, and “race hygiene” in Nazi-era Vienna, is a thorough, well-researched and extensively documented piece of work, which I think is based on several years of detailed examination of primary sources, often in their original German. It uses these sources – often previously untouched – to explore and rebut several claims Asperger made about himself, and also to examine the nature of his diagnostic work during the Nazi era to see whether he was resisting or aiding the Nazis in their racial hygiene goals. In this post I want to talk a little about the background of the paper, and ask a few questions about the implications of these findings for our understanding of autism, and also for our practice as public health workers in the modern era. I want to make clear that I do not know much if anything about Asperger’s syndrome or autism, so my questions are questions, not statements of opinion disguised as questions.

What was known about Asperger

Most of Asperger’s history under the Nazis was not known in the English language press, and when his name was attached to the condition of Asperger’s syndrome he was presented as a valiant defender of his patients against Nazi racial hygiene, and as a conscientious objector to Nazi ideology. This view of his life was based on some speeches and written articles translated into English during the post war years, in particular a 1974 interview in which he claims to have defended his patients and had to be saved from being arrested by the Gestapo twice by his boss, Dr. Hamburger. Although some German language publications were more critical, in general Asperger’s statements about his own life’s work were taken at face value, and seminal works in 1981 and 1991 that introduced him to the medical fraternity did not include any particular reference to his activities in the Nazi era.

What Asperger actually did

Investigation of the original documents shows a different picture, however. Before Anschluss (the German annexation of Austria in 1938), Asperger was a member of several far right Catholic political organizations that were known to be anti-semitic and anti-democratic. After Anschluss he joined several organizations affiliated with the Nazi party. His boss at the clinic where he worked was Dr. Hamburger, who he claimed saved him twice from the Gestapo. In fact Hamburger was an avowed Nazi – probably an entryist into these Catholic social movements during the period when Nazism was outlawed in Vienna – and a virulent anti-semite. He drove Jews out of the clinic even before Anschluss, and after 1938 all Jews were purged from the clinic, leaving openings that enabled Asperger to get promoted. It is almost impossible, given the power structures at the time, that Asperger could have been promoted if he disagreed strongly with Hamburger’s politics, but we have more than circumstantial evidence that they agreed: the author of the article, Herwig Czech, uncovered the annual political reports submitted concerning Asperger by the Gestapo, and they consistently agreed that he was either neutral or positive towards Nazism. Over time these reports became more positive and confident. Also during the war era Asperger gained new roles in organizations outside his clinic, taking on greater responsibility for public health in Vienna, which would have been impossible if he were politically suspect, and his 1944 PhD thesis was approved by the Nazis.

A review of Asperger’s notes also finds that he did send at least some of his patients to the “euthanasia” program, and in at least one case records a conversation with a parent in which the child’s fate is pretty much accepted by both of them. The head of the institution that did the “euthanasia” killings was a former colleague of Asperger’s, and the author presents pretty damning evidence that Asperger must have known what would happen to the children he referred to the clinic. It is clear from his speeches and writings in the Nazi era that Asperger was not a rabid killer of children with developmental disabilities: he believed in rehabilitating children and finding ways to make them productive members of society, only sending the most “ineducable” children to institutional care and not always to the institution that killed them. But it is also clear that he accepted the importance of “euthanasia” in some instances. In one particularly compelling situation, he was put in charge – along with a group of his peers – of deciding the fate of some 200 “ineducable” children in an institution for the severely mentally disabled, and 35 of those ended up being murdered. It seems unlikely that he did not participate in this process.

The author also notes that in some cases Asperger’s prognoses for some children were more severe than those of the doctors at the institute that ran the “euthanasia” program, suggesting that he wasn’t just a fair-weather friend of these racial hygiene ideals, and the author also makes the point that because Asperger remained in charge of the clinic in the post-war years he was in a very good position to sanitize his case notes of any connection with Nazis and especially with the murder of Jews. Certainly, the author does not credit Asperger’s claims that he was saved from the Gestapo by Hamburger, and suggests that these are straight-up fabrications intended to sanitize Asperger’s role in the wartime public health field.

Was Asperger’s treatment and research ethical in any way?

Reading the article, one question that occurred to me immediately was whether any of his treatments could be ethical, given the context, and also whether his research could possibly have been unbiased. The “euthanasia” program was actually well known in Austria at the time – so well known in fact that at one point allied bombers dropped leaflets about it on the town, and there were demonstrations against it at public buildings. So put yourself in the shoes of a parent of a child with a developmental disability, bringing your child to the clinic for an assessment. You know that if your child gets an unfavourable assessment there is a good chance that he or she will be sterilized or taken away and murdered. Asperger offers you a treatment that may rehabilitate the child. Obviously, with the threat of “euthanasia” hanging over your child, you will say yes to this treatment. But in modern medicine there is no way that we could consider that to be willing consent. The parent might actually not care about “rehabilitating” their child, and is perfectly happy for the child to grow up and be loved within the bounds of what their developmental disability allows them; it may be that rehabilitation is difficult and challenging for the child, and not in the child’s best emotional interests. But faced with that threat of a racial hygiene-based intervention, as a parent you have to say yes. Which means that in a great many cases I suspect that Asperger’s treatments were not ethical from any post-war perspective.

In addition, I also suspect that the research he conducted for his 1944 PhD thesis, in addition to being unethical, was highly biased, because the parents of these children were lying through their teeth to him. Again, consider yourself as the parent of such a child, under threat of sterilization or murder. You “consent” to your child’s treatment regardless of what might be in the child’s best developmental and emotional interests, and also allow the child to be enrolled in Asperger’s study[1]. Then your child will be subjected to various rehabilitation strategies, what Asperger called pedagogical therapy. You will bring your child into the clinic every week or every day for assessments and tests. Presumably the doctor or his staff will ask you questions about the child’s progress: does he or she engage with strangers? How is his or her behavior in this or that situation? In every situation where you can, you will lie and tell them whatever you think is most likely to make them think that your child is progressing. Once you know what the tests at the clinic involve, you will coach your child to make sure he or she performs well in them. You will game every test, lie at every assessment, and scam your way into a rehabilitation even if your child is gaining nothing from the program. So all the results on rehabilitation and the nature of the condition that Asperger documents in his 1944 PhD thesis must be based on extremely dubious research data. You simply cannot believe that the research data you obtained from your subjects is accurate when some of them know that their responses decide whether their child lives or dies. 
Note that this problem with his research exists regardless of whether Asperger was an active Nazi – it’s a consequence of the times, not the doctor – but it is partially ameliorated if Asperger actually was an active resister to Nazi ideology, since it’s conceivable in that case that the first thing he did was give the parent an assurance that he wasn’t going to ship their kid off to die no matter what his diagnosis was. But since we now know he did ship kids off to die, that possibility is off the table. Asperger’s research subjects were consenting to a research study and providing subjective data on the assumption that the study investigator was a murderer with the power to kill their child. This means Asperger’s 1944 work probably needs to be ditched from the medical canon, simply on the basis of the poor quality of the data. It also has implications, I think, for some of his conclusions and their influence on how we view Asperger’s syndrome.

What does this mean for the concept of the autism spectrum?

Asperger introduced the idea of a spectrum of autism, with some of the children he called “autistic psychopaths” being high functioning and some being low functioning, across a spectrum of disorder. This idea seems to be an important part of modern discussion of autism as well. But from my reading of the paper [again I stress I am not an expert] it seems that this definition was at least partly informed by the child’s response to therapy. That is, if a child responded to therapy and was able to be “rehabilitated”, they were deemed high functioning, while those who did not were considered low functioning. We have seen that it is likely that some of the parents of these children were lying about their children’s functional level, so probably his research results on this topic are unreliable, but there is a deeper problem with this definition, I think. The author implies that Asperger was quite an arrogant and overbearing character, and it seems possible to me that he was deeply flawed in assuming that his therapy would always work, and that if it failed the problem lay with the child’s level of function. What if his treatment only worked 50% of the time, randomly? Then the 50% of children who failed are not “low-functioning”, they’re just unlucky. If we compare with a pharmaceutical treatment, it simply is not the case that when your drugs fail your doctor deems this to be because you are “low functioning”, and ships you off to the “euthanasia” clinic. They assume the drugs didn’t work and give you better, stronger, or more experimental drugs. Only when all the possible treatments have failed do they finally deem your condition to be incurable. But there is no evidence that Asperger considered the possibility that his treatment was the problem, and because the treatment was entirely subjective – the parameters decided on a case-by-case basis – there is no way to know whether the problem was the children or the treatment.
So to the extent that this concept of a spectrum is determined by Asperger’s judgment of how the child responded to his entirely subjective treatment, maybe the spectrum doesn’t exist?

This is a particular problem because the concept of “functioning” was deeply important to the Nazis and was closely connected to who got selected for murder. In the Nazi era, to quote Negan, “people were a resource”, and everyone was expected to be functioning. Asperger’s interest in this spectrum and the diagnosis of children along it wasn’t solely, or even primarily, driven by a desire to understand the condition of “autistic psychopathy”; it was integral to his racial hygiene conception of what to do with these children. In determining where on the spectrum they lay he was providing a social and public health diagnosis, not a personal diagnosis. His concern here was not with the child’s health or wellbeing or even an accurate assessment of the depth and nature of their disability – he and his colleagues were interested in deciding whether to kill them or not. Given the likely biases in his research, the dubious link between the definition of the spectrum and his own highly subjective treatment strategy, and the real reasons for defining this spectrum, is it a good idea to keep it as a concept in the handling of autism in the modern medical world? Should we revisit this concept, if not to throw it away at least to reconsider how we define the spectrum and why we define it? Is it in the best interests of the child and/or their family to apply this concept?

How much did Asperger’s racial hygiene influence ideas about autism’s heritability?

Again, I want to stress that I know little about autism and it is not my goal here to dissect the details of this disease. However, from what I have seen of the autism advocacy movement, there does seem to be a strong desire to find some deep biological cause of the condition. I think parents want – rightly – to believe that it is not their fault that their child is autistic, and that the condition is not caused by environmental factors that might somehow be associated with their pre- or post-natal behaviors. Although the causes of autism are not clear, there seems to be a strong inclination among some in the autism community to see it as biological or inherited. I think this is part of the reason that Andrew Wakefield’s scam linking autism to MMR vaccines remains successful despite his disbarment in the UK and exile to America. Parents want to think that they did not cause this condition, and blaming a pharmaceutical company is an easy alternative to this possibility. Heritability is another alternative explanation to behavioral or environmental causes. Asperger of course thought that autism was entirely inherited, blaming it – and its severity – on the child’s “constitution”, which was his phrase for their genetic inheritance. This is natural for a Nazi, of course – Nazis believe everything is inherited. Asperger also believed that sexual abuse was due to genetic causes (some children had a genetic property that led them to “seduce” adults!). Given Asperger’s influence on the definition of autism, I think it would be a good idea to assess how much his ideas also influence the idea that autism is inherited or biologically determined, and to question the extent to which this is just received knowledge from the original researcher. On a broader level, I wonder how many conditions identified during the war era and immediately afterwards were influenced by racial hygiene ideals, and how much the Nazi medical establishment left a taint on European medical research generally.

What lessons can we learn about public health practice from this case?

It seems pretty clear that some mistakes were made in the decision to assign Asperger’s name to this condition, given what we now know about his past. It also seems clear that Asperger was able to whitewash his reputation and bury his responsibilities for many years, including potentially avoiding being held accountable as an accessory to murder. How many other medical doctors, social scientists and public health workers from this time were also able to launder their history and reinvent themselves in the post-war era as good Germans who resisted the Nazis, rather than active accomplices of a murderous and cruel regime? What is the impact of their rehabilitation on the ethics and practice of medicine or public health in the post-war era? If someone was a Nazi who believed that murdering the sick, the disabled and certain races was good for the race, then when they laundered their history there is no reason to think they laundered their beliefs as well. Instead they carried these beliefs into the post-war era, and presumably quietly continued acting on them in the institutions they now occupied and corrupted. How much of European public health practice still bears the taint of these people? It’s worth bearing in mind that in the post-war era many European countries continued to run a variety of programs that we now consider to have been rife with human rights abuse, in particular the way institutions for the mentally ill were run, the treatment of the Roma people (which often maintained racial-hygiene elements even decades after the war), treatment of “promiscuous” women and single mothers, and management of orphanages. How much of this is due to the ideas of people like Asperger, propagating slyly through the post-war public health institutional framework and carefully hidden from view by men like him, who were assiduously purging past evidence of their criminal actions and building a public reputation for purity and good ethics?
I hope that medical historians like Czech will in future investigate these questions.

This is not just a historical matter, either. I have colleagues and collaborators who work in countries experiencing various degrees of authoritarianism and/or racism – countries like China, Vietnam, Singapore, the USA – who are presumably vulnerable to the same kinds of institutional pressures at work in Nazi Germany. There have been cases, for example, of studies published from China that were likely done using organs harvested from prisoners. Presumably the authors of those studies thought this practice was okay? If China goes down a racial hygiene path, will public health workers who are currently doing good, solid work on improving the public health of the population start shifting their ideals towards murderous extermination? Again, this is not an academic question: After 9/11, the USA’s despicable regime of torture was developed by two psychologists, who presumably were well aware of the ethical standards their discipline is supposed to maintain, and just ignored them. The American Psychological Association had to amend its code in 2016 to include an explicit statement about avoiding harm, but I can’t find any evidence of any disciplinary proceedings by either the APA or the psychologists’ graduating universities to take action for the psychologists’ involvement in this shocking scheme. So it is not just in dictatorships that public policy pressure can lead to doctors taking on highly unethical standards. Medical, psychological and public health communities need to take much stronger action to make sure that our members aren’t allowed to give in to their worst impulses when political and social pressure comes to bear on them.

These ideas are still with us

As a final point, I want to note that the ideas that motivated Asperger are not all dead, and the battle against the pernicious influence of racial hygiene was not won in 1945. Here is Asperger in 1952, talking about “feeblemindedness”:

Multiple studies, above all in Germany, have shown that these families procreate in numbers clearly above the average, especially in the cities. [They] live without inhibitions, and rely without scruples on public welfare to raise or help raise their children. It is clear that this fact presents a very serious eugenic problem, a solution to which is far off—all the more, since the eugenic policies of the recent past have turned out to be unacceptable from a human standpoint

And here is Charles Murray in 1994:

We are silent partly because we are as apprehensive as most other people about what might happen when a government decides to social-engineer who has babies and who doesn’t. We can imagine no recommendation for using the government to manipulate fertility that does not have dangers. But this highlights the problem: The United States already has policies that inadvertently social-engineer who has babies, and it is encouraging the wrong women. If the United States did as much to encourage high-IQ women to have babies as it now does to encourage low-IQ women, it would rightly be described as engaging in aggressive manipulation of fertility. The technically precise description of America’s fertility policy is that it subsidizes births among poor women, who are also disproportionately at the low end of the intelligence distribution. We urge generally that these policies, represented by the extensive network of cash and services for low-income women who have babies, be ended. [Emphasis in the Vox original]

There is an effort in Trump’s America to rehabilitate Murray’s reputation, long after his policy prescriptions were enacted during the 1990s. There isn’t any real difference between Murray in 1994, Murray’s defenders in 2018, or Asperger in 1952. We now know what the basis for Asperger’s beliefs was. Sixty years later they’re still there in polite society, almost getting to broadcast themselves through the opinion pages of a major centrist magazine. Racial hygiene didn’t die with the Nazis, and we need to redouble our efforts now to get this pernicious ideology out of public health, medicine, and public policy. I expect that in the next few months this will include some uncomfortable discussions about Asperger’s legacy, and I hope a reassessment of the entire definition of autism, Asperger’s syndrome and its management. But we should all be aware that in these troubled times, the ideals that motivated Asperger did not die with him, and our fields are still vulnerable to their evil influence.

 


fn1: Note that you consent to this study regardless of your actual views on its merits, whether it will cause harm to your child, etc. because this doctor is going to decide whether your child “rehabilitates” or slides out of view and into the T4 program where they will die of “pneumonia” within 6 months, and so you are going to do everything this doctor asks. This is not consent.

The media this week are exploding with news that a company called Cambridge Analytica used shadily-obtained Facebook data to influence the US elections. The data was harvested by some other shady company using an app that legally exploited Facebook’s privacy rules at the time, and then handed over to Cambridge Analytica, who then used the data to micro-target adverts over Facebook during the election, mostly aimed at getting Trump elected. The news is still growing, and it appears that Cambridge Analytica was up to a bunch of other shady stuff too – swinging elections in developing countries through fraud and honey-traps, getting Facebook data from other sources and possibly colluding illegally with the Trump campaign against campaign funding laws – and it certainly looks like a lot of trouble is deservedly coming their way.

In response to this a lot of people have been discussing Facebook itself as if it is responsible for this problem, is itself a shady operator, or somehow represents a new and unique problem in the relationship between citizens, the media and politics. Elon Musk has deleted his company’s Facebook accounts, there is a #deleteFacebook campaign running around, and lots of people are suggesting that the Facebook model of social networking is fundamentally bad (see e.g. this Vox article about how Facebook is simply a bad idea).

I think a lot of this reaction against Facebook is misguided, does not see the real problem, and falls into the standard mistake of thinking a new technology must necessarily come with new and unique threats. I think it misses the real problem underlying Cambridge Analytica’s use of Facebook data to micro-target ads during the election and to manipulate public opinion: the people reading the ads.

We use Facebook precisely because of the unique benefits of its social and sharing model. We want to see our friends’ lives and opinions shared amongst ourselves, we want to be able to share along things we like or approve of, and we want to be able to engage with what our friends are thinking and saying. Some people using Facebook may do so as I do, carefully curating content providers we allow on our feed to ensure they aren’t offensive or upsetting, and avoiding allowing any political opinions we disagree with; others may use it for the opposite purpose, to engage with our friends’ opinions, see how they are thinking, and openly debate and disagree about a wide range of topics in a social forum. Many of us treat it as an aggregator for cat videos and cute viral shit; some of us only use it to keep track of friends. But in all cases the ability of the platform to share and engage is why we use it. It’s the one thing that separates it from traditional mass consumption media. This is its revolutionary aspect.

But what we engage with on Facebook is still media. If your friend shares a Fox and Friends video of John Bolton claiming that Hillary Clinton is actually a lizard person, when you watch that video you are engaging with it just as if you were engaging with Fox and Friends itself. The fact that it’s on Facebook instead of TV doesn’t suddenly exonerate you of the responsibility and the ability to identify that John Bolton is full of shit. If Cambridge Analytica micro-target you with an ad that features John Bolton claiming that Hillary Clinton is a lizard person, that means Cambridge Analytica have evidence that you are susceptible to that line of reasoning, but the fundamental problem here remains that you are susceptible to that line of reasoning. Their ad doesn’t become extra brain-washy because it was on Facebook. Yes, it’s possible that your friend shared it and we all know that people trust their friends’ judgment. But if your friends think that shit is reasonable, and you still trust your friend’s judgment, then you and your friend have a problem. That’s not Facebook’s problem, it’s yours.

This problem existed before Facebook, and it exists now outside of Facebook. Something like 40% of American adults think that Fox News is a reliable and trustworthy source of news, and many of those people think that anything outside of Fox News is lying and untrustworthy “liberal media”. The US President apparently spends a lot of his “executive time” watching Fox and Friends and live tweeting his rage spasms. No one forces him to watch Fox and Friends, he has a remote control and fingers, he could choose to watch the BBC. It’s not Facebook’s fault, or even Fox News’s fault, that the president is a dimwit who believes anything John Bolton says.

This is a much bigger problem than Facebook, and it’s a problem in the American electorate and population. Sure, we could all be more media savvy, we could all benefit from better understanding how Facebook abuses privacy settings, shares our data for profit, and enables micro-targeting. But once that media gets to you it’s still media and you still have a responsibility to see if it’s true or not, to assess it against other independent sources of media, to engage intellectually with it in a way that ensures you don’t just believe any old junk. If you trust your friends’ views on vaccinations or organic food or Seth Rich’s death more than you trust a doctor or a police prosecutor then you have a problem. Sure, Facebook might improve the reach of people wanting to take advantage of that problem, but let’s not overdo it here: In the 1990s you would have been at a bbq party or a bar, nodding along as your friend told you that vaccines cause autism and believing every word of it. The problem then was you, and the problem now is you. In fact it is much easier now for you to not be the problem. Back in the 1990s at that bbq you couldn’t have surreptitiously whipped out your iPhone and googled “Andrew Wakefield” and discovered that he’s a fraud who has been disbarred by the GMC. Now you can, and if you choose not to because you think everything your paranoid conspiracy theorist friend says is true, the problem is you. If you’re watching some bullshit Cambridge Analytica ad about how Hillary Clinton killed Seth Rich, you’re on the internet, so you have the ability to cross reference that information and find out what the truth might actually be. If you didn’t do that, you’re lazy or you already believe it or you don’t care or you’re deeply stupid. It’s not Facebook’s fault, or Cambridge Analytica’s fault. It’s yours.

Facebook offers shady operatives like Robert Mercer the ability to micro-target their conspiracy theories and lies, and deeper and more effective reach of their lies through efficient use of advertising money and the multiplicative effect of the social network feature. It also gives them a little bit of a trust boost because people believe their friends are trustworthy. But in the end the people consuming the media this shady group produce are still people with an education, judgment, a sense of identity and a perspective on the world. They are still able to look at junk like this and decide that it is in fact junk. If you sat through the 2016 election campaign thinking that this con-artist oligarch was going to drain the swamp, the problem is you. If you thought that Clinton’s email practices were the worst security issue in the election, the problem is you. If you honestly believed The Young Turks or Jacobin mag when they told you Clinton was more militarist than Trump, the problem is you. If you believed Glenn Greenwald when he told you the real threat to American security was Clinton’s surveillance and security policies, the problem is you. If you believed that Trump cared more about working people than Hillary Clinton, then the problem is you. This stuff was all obvious and objectively checkable and easy to read, and you didn’t bother. The problem is not that Facebook was used by a shady right wing mob to manipulate your opinions into thinking Clinton was going to start world war 3 and hand everyone’s money to the bankers. The problem is that when this utter bullshit landed in your feed, you believed it.

Of course the problem doesn’t stop with the consumers of media but with the creators. Chris Cillizza is a journalist who hounded Clinton about her emails and her security issues before the election, and to this day continues to hound her, and he worked for reputable media organizations who thought his single-minded obsession with Clinton was responsible journalism. The NY Times was all over the email issues, and plenty of NY Times columnists like Maureen Dowd were sure Trump was less militarist than Clinton. Fox carefully curated their news feed to ensure the pussy-grabbing scandal was never covered, so more Americans knew about the emails than the pussy-grabbing. Obviously if no one is creating content about how terrible Trump is then we on Facebook are not able to share it with each other. But again the problem here is not Facebook – it’s the American media. Just this week we learn that the Atlantic, a supposedly centrist publication, is hiring Kevin D Williamson – a man who believes women who get abortions should be hanged – to provide “balance” to its opinion section. This isn’t Facebook’s fault. The utter failure of the US media to hold their government even vaguely accountable for its actions over the past 30 years, or to inquire with any depth or intelligence into the utter corruption of the Republican party, is not Facebook’s fault or ours, it’s theirs. But it is our job as citizens to look elsewhere, to try to understand the flaws in the reporting, to deploy our education to the benefit of ourselves and the civic society of which we are a part. That’s not Facebook’s job, it’s ours. Voting is a responsibility as well as a right, and when you prepare to vote you have the responsibility to understand the information available about the people you are going to vote for. 
If you decide that you would rather believe Clinton killed Seth Rich to cover up a paedophile scandal, rather than reading the Democratic Party platform and realizing that strategic voting for Clinton will benefit you and your class, then the problem is you. You live in a free society with free speech, and you chose to believe bullshit without checking it.

Deleting Facebook won’t solve the bigger problem, which is that many people in America are not able to tell lies from truth. The problem is not Facebook, it’s you.

 

Nail them to the wall

In September 2017 Philip Morris International (PMI) – one of the world’s largest cigarette companies – introduced a new foundation to the world: The Foundation for a Smoke Free World. This foundation will receive $80 million per year from PMI for the next 12 years and devote this money to researching “smoking cessation, smoking harm reduction and alternative livelihoods for tobacco farmers”, with the aim of drawing in more money from non-tobacco donors over that time. It is seeking advice on how to spend its research money, and it claims to be completely independent of the tobacco industry: although it will receive almost a billion dollars from PMI over that period, it claims to have a completely independent research agenda.

The website for the Foundation includes a bunch of compelling statistics on its front page: There is one death every six seconds from smoking, 7.2 million deaths annually, second-hand smoke kills 890,000 people annually, and smoking kills half of all its long-term users. It’s fascinating that a company that as late as the late 1990s was claiming there is no evidence its product kills has now set up a foundation with such a powerful admission of the toxic nature of its product. It’s also wrong: the most recent research suggests that 2/3 of users will die from smoking. It’s revealing that even when PMI is being honest it understates the true level of destruction it has wrought on the human race.

That should serve as an object lesson in what this Foundation is really about. It’s not an exercise in genuine tobacco control, but a strategy to launder PMI’s reputation, and to escape the tobacco control deadlock. If PMI took these statistics seriously it could solve the problem it appears to have identified very simply, by ceasing the production of cigarettes and winding up its business. I’m sure everyone on earth would applaud a bunch of very rich tobacco company directors who awarded themselves a fat bonus and simply shut down their business, leaving their shareholders screwed. But that’s not what PMI wants to do. They want to launder their reputation and squirm out from under the pressure civil society is placing on them. They want to start a new business looking all shiny and responsible, and the Foundation is their tool.

PMI have another business model in mind. PMI are the mastermind behind iQos, the heat-not-burn product that they are trialling with huge success in Japan. This cigarette alternative still provides its user with a nicotine hit but it does it through heating a tobacco substance, rather than burning it, avoiding much of the carcinogenic products of cigarettes. PMI have been touting this as the future alternative to cigarettes, and are claiming huge market share gains in Japan based on the product. Heat-not-burn technologies offer clear harm reduction opportunities for tobacco use: although we don’t know what their toxicity is, it’s almost certainly much lower than that of cigarettes, and every smoker who switches to iQos is likely significantly reducing their long term cancer risk. What PMI needs is for the world to adopt a harm reduction strategy for smoking, so that they can switch from cigarettes to iQos. But the tobacco control community is still divided on whether harm reduction is a better approach than prohibition and demand reduction, which between them have been very successful in reducing smoking.

So isn’t it convenient that there is a new Foundation with a billion dollars to spend on a research platform of “smoking cessation, harm reduction and alternative livelihoods.” It’s as if this Foundation’s work perfectly aligns with PMI’s business strategy. And is it even big money? Recently PMI lost a court case against plain packaging in Australia – because although their foundation admits that smoking kills, they weren’t willing to let the Australian government sell packages that say as much – and have to pay at least $50 million in costs. PMI’s sponsorship deal with Ferrari will cost them $160 million. They spent $24 million fighting plain packaging laws in Uruguay (population: 4 million). $80 million is not a lot of money for them, and they will likely spend as much every year lobbying governments to postpone harsh measures, fighting the Framework Convention on Tobacco Control, and advertising their lethal product. This Foundation is not a genuine vehicle for research, it’s an advertising strategy.

It’s a particularly sleazy advertising strategy when you consider the company’s history and what the Foundation claims to do. This company fought any recognition that its products kill, but this Foundation admits that the products kill, while PMI itself continues to fight any responsibility for the damage it has done. This company worked as hard as it could for 50 years to get as many people as possible addicted to this fatal product, but this Foundation headlines its website with “a billion people are addicted and want to stop”. This Foundation will research smoking cessation while the company that funds it fights every attempt to prevent smoking initiation in every way it can. The company no doubt knows that cessation is extremely difficult, and that ten dollars spent on cessation are worth one dollar spent on initiation. It’s precious PR in a time when tobacco companies are really struggling to find anything good to say about themselves.

And as proof of the PR gains, witness the Lancet‘s craven editorial on the Foundation, which argues that public health researchers and tobacco control activists should engage with it rather than ostracizing it, in the hope of finding some common ground on this murderous product. The WHO is not so pathetic. In a press release soon after the Foundation was established they point out that it directly contravenes Article 5.3 of the Framework Convention on Tobacco Control, which forbids signatories from allowing tobacco companies to have any involvement in setting public health policy. They state openly that they won’t engage with the organization, and request that others also do not. The WHO has been in the forefront of the battle against tobacco and the tobacco industry for many years, and they aren’t fooled by these kinds of shenanigans. This is an oily trick by Big Tobacco to launder their reputation and try to ingratiate themselves with a world that is sick of their tricks and lies. We shouldn’t stand for it.

I think it’s unlikely that researchers will take this Foundation’s money. Most reputable public health journals have a strict rule that they will not publish research funded by tobacco companies or organizations associated with them, and it is painfully obvious that this greasy foundation is a tobacco company front. This means that most researchers won’t be able to publish any research they do with money from this foundation, and I suspect this means they won’t waste their time applying for the money. It seems likely to me that they will struggle to disburse their research funds in a way that, for example, the Bill and Melinda Gates Foundation does not. I certainly won’t be trying to get any of this group’s money.

The news of this Foundation’s establishment is not entirely bad, though. Its existence is a big sign that the tobacco control movement is winning. PMI know that their market is collapsing and their days are numbered. Sure, they can try to target emerging markets in countries like China, but they know the tobacco control movement will take hold in those markets too, and they’re finding it increasingly difficult to make headway. Smoking rates are plummeting in the highest profit markets, forcing them to chase slimmer pickings in developing countries where tobacco control is rapidly growing in power. At the same time their market share is being stolen in developed countries by e-cigarettes, a market they have no control over, and as developing nations become wealthier and tobacco control strengthens, e-cigarettes grow in popularity there too. Furthermore, the Foundation is a sign that the tobacco companies’ previous united front on strategy is falling apart. After the UK high court rejected a tobacco company challenge to plain packaging laws, PMI alone decided not to join an appeal, and now PMI has established this Foundation. PMI admits they’ve lost, has developed iQos, and is looking for an alternative path to the future while the other tobacco companies fight to defend their product.

But should PMI be allowed to take their path? From a public health perspective it’s a short term gain if PMI switch to being a provider of harm reducing products. But there are a bunch of Chinese technology companies offering e-cigarettes as an alternative to smoking. If we allow PMI to join that harm reduction market they will be able to escape the long term consequences of their business decisions. And should they be allowed to? I think they shouldn’t. I think the tobacco companies should be nailed to the wall for what they did. For nearly 70 years these scumbags have denied their products caused any health problems, have spent huge amounts of money on fighting any efforts to control their behavior, and have targeted children and the most vulnerable. They have spent huge amounts of money establishing a network of organizations, intellectuals and front groups that defend their work but – worse still – pollute the entire discourse of scientific and evidence based policy. The growth of global warming denialism, DDT denialism, and anti-environmentalism is connected to Big Tobacco’s efforts to undermine scientific evidence for decent public health policy in the 1980s and 1990s. These companies have done everything they can to pollute public discourse over decades, in defense of a product that we have known is poison since the 1950s. They have had a completely pernicious effect on public debate and all the while their customers have been dying. These companies should not be allowed to escape the responsibility for what they did. Sure, PMI could develop and market a heat-not-burn product or some kind of e-cigarette: but should we let them, when some perfectly innocent Chinese company could steal their market share? No, we should not. Their murderous antics over 70 years should be an albatross around their neck, dragging these companies down into ruin. 
They should be shackled to their product, never able to escape from it, and their senior staff should never be allowed to escape responsibility for their role in promoting and marketing this death. The Foundation for a Smoke Free World is PMI’s attempt to escape the shackles of a murderous poison that it flogged off to young and poor people remorselessly for 70 years. They should not be allowed to get away with it – they should be nailed to the wall for what they did. No one should cooperate with this corrupt and sleazy new initiative. PMI should die as if they had been afflicted with the cancer that is their stock in trade, and they should not be allowed to worm out from under the pressure they now face. Let them suffer for the damage they did to human bodies and civil society, and do not cooperate with this sick and cynical Foundation.

Two days ago I wrote a scathing review of Star Wars: The Last Jedi, and since then I have been digging around for others’ views on the matter. The Guardian has an article giving some fans’ reviews, and the below the line comments are suitably critical of this awful movie. Meanwhile Vox has a pathetic, self-serving article by a film critic attempting to explain why so many people have such different views to the critics. This article includes such great insights as “critics don’t really care about plot” which is dismissed as a “nitty gritty detail” of a movie – they’re more interested in themes and emotional struggles, apparently, which suggests they’d be more at home at a My Chemical Romance gig than a decent movie. How did they get the job?

In amongst the complaints on the Guardian‘s article, and at the centre of the Vox piece, is a particularly vicious little dismissive claim: That a lot of the negative reaction to the movie arises from long term fans[1], who cannot handle what Rian Johnson did with their cherished childhood movie, and are unrepresentative of the broader movie-going public. In the more vernacular form of some of the BTL comments on the Guardian article, fanboys are pissed off because Rian Johnson didn’t make the movie exactly the way they wanted. This, apparently, explains the difference between the critics’ view of the movie and the people giving a review on the Rotten Tomatoes website.

I thought this sounded fishy, so I decided to collect a little bit of data from the Rotten Tomatoes website and have a look at just how far fanboys typically deviate from critics. I figured that if fanboys’ disappointment with not getting a movie exactly as they wanted it was the driver of negative reactions to this movie, we should see it in other Star Wars movies. We should also see it in other movies with a strong fanboy following, and maybe we wouldn’t see it in movies that don’t have strong preconceptions. I collected data on critics’ and fans’ aggregated review statistics for 35 movies from the Rotten Tomatoes website. For each movie I calculated a score, which I call the Odds Ratio of Critical Acceptance (ORCA). This is calculated as follows:

1. Calculate an odds for the critics’ aggregate score, O1, which is (score)/(1-score)

2. Calculate an odds for the viewers’ aggregate score, O2, which is (score)/(1-score)

3. Calculate their ratio, ORCA=O1/O2

I use this score because it accounts for the inherent limits on the value of a critical score. The Last Jedi got a critics’ score of 0.93, which is very close to the upper limit of 1. If the viewers’ score was, for example, 0.83, it is 0.1 lower than the critics’ score. But this 0.1 is a much larger gap than, say, the difference between a critics’ score of 0.55 and a viewers’ score of 0.45. Similarly, if critics give a movie a value of 0.1 and viewers a value of 0.2, this means viewers thought it was twice as good – whereas values of 0.45 and 0.55 are much less different. We use this kind of odds ratio in epidemiology a lot because it allows us to properly account for small differences when one score is close to 1, as (inexplicably) it is for this horrible movie. Note that ORCA scores above 1 indicate that the critics gave the movie a higher score than the viewers, and scores below 1 indicate that the viewers liked the movie more than the critics.
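For the concretely minded, the three steps above can be sketched as a tiny function (the 0.93/0.55 scores are those quoted elsewhere in this post; the 0.83 and 0.45 comparisons are the hypothetical ones from the paragraph above):

```python
# ORCA: Odds Ratio of Critical Acceptance, as defined in steps 1-3 above.

def orca(critics_score, viewers_score):
    """Return odds(critics) / odds(viewers) for scores in (0, 1)."""
    o1 = critics_score / (1 - critics_score)  # step 1: critics' odds
    o2 = viewers_score / (1 - viewers_score)  # step 2: viewers' odds
    return o1 / o2                            # step 3: their ratio

# The Last Jedi: critics 0.93, viewers 0.55
print(round(orca(0.93, 0.55), 1))  # 10.9

# The hypothetical gaps from the text: 0.93 vs 0.83 is a much bigger
# difference, in odds terms, than 0.55 vs 0.45
print(round(orca(0.93, 0.83), 2))  # 2.72
print(round(orca(0.55, 0.45), 2))  # 1.49
```

Note that equal scores give ORCA = 1 exactly, which is why 1 is the natural dividing line in the figure below.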

I collected scores for all the Star Wars movies, all three Lord of the Rings movies, both Ghost in the Shell movies (the Japanese and the western remake), both Blade Runners, Alien:Covenant, two Harry Potter movies, Fifty Shades of Grey, and Gedo Senki (the (filthy) Studio Ghibli version of A Wizard of Earthsea), as examples of movies with a fanboy following. As readers of my blog are no doubt very aware, the Lord of the Rings fanboys are absolutely filthy, and if anyone is going to sink a movie over trivial shit they will. Ghost in the Shell is a remake of a movie with a very strong otaku following of the worst kind, and also suffers from a huge controversy over whitewashing, and Gedo Senki is based on one of the world’s most popular books, by a woman who has an intense generation-spanning cadre of fans who are obsessed with her work. Harry Potter fans are also notoriously committed. I also gathered a bunch of movies that I like or that I thought would be representative of the kinds of movies that did not have a following before they were released: Mad Max Fury Road, Brokeback Mountain, that new movie about a transgender bull[3], Ferdinand, things like that. I figured that some of these movies would not get a big divergence in ORCA if the fanboy theory is true.

Figure 1: ORCA Scores for a range of movies, none apparently as shit as The Last Jedi.

Results of my calculations are shown in Figure 1 (sorry about the fiddly size). The Last Jedi is on the far left, and is obviously a massive outlier, with an ORCA score of 10.9. This score arises because it has a critics’ score of 93%, but a score from fans of 55%[4]. Next is Mad Max: Fury Road, which was not as successful with fans as with critics but still got a rating of 0.85 from fans. It can be noted that several Star Wars movies lie to the right of the pale blue dividing line, indicating that fans liked them more than did critics – this includes Rogue One and The Phantom Menace, showing that this phenomenon was not limited to the first generation movies. Note that Fellowship of the Ring, the LoTR movie most likely to disappoint fanboys under the theory that fanboys want the director to make the movie in their heads, had an ORCA value of 0.53, indicating fans had twice the odds of liking it than did critics. Gedo Senki also did better with fans than critics despite being a terrible movie that completely pisses all over Ursula Le Guin’s original book.

There’s no evidence at all from this data that fanboys respond badly to movies based on not getting the movie in their head, and there’s no evidence that Star Wars fanboys are particularly difficult to please. The ORCA score for The Last Jedi is at least 12 parsecs removed from the ORCA score for the next-worst movie in the series, which (despite that movie also being a pile of shit) is not that high – it’s lower than Dunkirk’s, in fact, which was an awesome movie with no pre-existing fanbase[5]. Based on this data it should be pretty clear that either the “toxic fandom” of Star Wars has been hiding for the past 10 years as repeated bad movies were made – or this movie is uniquely bad, and the critics were uniquely stupid to give it a good score.

I’m going with the latter conclusion, and I want the movie critics to seriously re-evaluate how they approached this movie. Star Wars clearly gets a special pass from critics because it’s so special, and Star Wars directors can lay any stinking turd on the screen and get a pass from critics for some incomprehensible reason. Up your game, idiots.

A few minor side points about critical reviews of The Last Jedi

I’ve been generally shocked by the way in which this movie is being hailed as a critical masterpiece. I really can’t see how this can be. Even if it’s not as bad as I think, I can’t understand how it can get similar scores to movies like Dunkirk, Mad Max: Fury Road, or Titanic. Those movies are infinitely better crafted than this pile of junk, with tight and carefully designed plots that clearly hold together under extensive criticism. There is nothing extraneous at all in Titanic or Dunkirk, not one moment that you could say isn’t directly relevant to the unfolding story, and the acting in all three of these movies is exemplary. Worse still, the Guardian is now claiming that Star Wars is the most triumphantly feminist movie yet. This is utter bullshit on its face: the main male character, Poe Dameron, repeatedly undermines female leaders, and their attempts to discipline him are ignored, ultimately leading to the death of probably 200 people in a completely avoidable catastrophe, and he suffers no consequences for his dishonesty and treachery. Furthermore, he takes over the main role from Finn, the black character, and Rey is sidelined into a supplicant to an aging white man. As a moral story for entitled white men who can’t bear to be told what to do by women it’s exemplary. But this is even more horrific when you consider that Mad Max: Fury Road is a savage eco-feminist masterpiece, and undoubtedly the most triumphantly feminist movie ever made. This is another example of the weird special pass that Star Wars movies get: they make piss-poor tokenistic gestures towards diversity and the critics claim they’re the most woke movie ever made.

There’s a strange irony in this. Star Wars fanboys are being blamed for obstinately marking this movie down on the basis of silly stereotypes about nerds, when in fact it’s the critics themselves who are acting like Star Wars sycophants, giving one of the worst movies of the millennium sterling marks for trying. Unless of course the conspiracy theories are true, and they’re all paid by Disney.

I won’t be so cynical. They’re just stupid and wrong, and in future I recommend not listening to reviewers before going to see any movie. Trust the viewers, they have much better judgment!

UPDATE: I have swapped my shoddy figure with a figure supplied by reader frankelavsky, who apparently actually knows how to do visual stuff, so it’s now much easier to see how terribly wrong the reviewers were.


fn1: Which, inexplicably, the Vox article seems to view as Baby Boomers, which is weird since most people want to now pretend Star Wars is a kid’s movie (it’s not[2]). Many of the fans saw it as kids, it’s true, but that’s because we were Gen X, not baby boomers. More importantly, Star Wars fandom crosses three generations, and includes a lot of Generation Y. It’s just dumb to even hint that the themes in the movie pissed off the fans because baby boomers don’t like the idea of handing on the baton to a new, more diverse generation. Star Wars fans aren’t baby boomers, and why would baby boomers have a problem with this anyway?

fn2: How fucking stupid is modern pop cultural analysis of pop culture, and how far has it fallen, that people could say this?

fn3: This is a joke. See here for more details.

fn4: It was 56% yesterday. This movie is sinking by the day.

fn5: Barring UKIP, I guess

UPDATE: Dr. Monnat has left a comment pointing out that I made a major error in reading her methods (I assumed she used non-standardized rates, but in the methods she specifies that she did standardize them). So I have removed one criticism of her paper and modified another about regression. This doesn’t change the thrust of my argument (though if Dr. Monnat is patient enough to engage with more of my criticisms, maybe it will!)

Since late 2016 a theory has been circulating that Donald Trump’s election victory can be related to the opioid epidemic in rust belt America. Under this theory, parts of mid-West America with high levels of unemployment and economic dislocation that are experiencing high levels of opioid addiction switched votes from Democrat to Republican and elected Trump. This is part of a broader idea that America is suffering an epidemic of “deaths of despair” – deaths due to opioids, suicide and alcohol abuse – that are part of a newfound social problem primarily afflicting working class white people, and the recent rapid growth in the rate of these “deaths of despair” drove a rebellion against the Democrats, and towards Trump.

This theory is bullshit, for a lot of reasons, and in this post I want to talk about why. To be clear, it’s not just a bit wrong: it’s wrong in all of its particulars. The data doesn’t support the idea of a growing death rate amongst white working class people; the data does not support a link between “deaths of despair” and Trump voting; there is no such thing as a “death of despair”; and there is no viable explanation for why an epidemic of “deaths of despair” should drive votes for Trump. The theory is attractive to a certain kind of theorist because it enables them to pretend that the Trump phenomenon doesn’t represent a deep problem of racism in American society, but it doesn’t work. Let’s look at why.

The myth of rising white mortality

First let’s consider the central framework of this story: the idea that mortality rates have been rising rapidly among middle-aged whites in America over the past 20 years, popularized by two economists (Case and Deaton) in a paper in PNAS. This paper is deeply flawed because it does not adjust for age, which has been increasing rapidly among white Americans but not non-white Americans (due to differential birth and migration patterns in earlier eras). Case and Deaton studied mortality in 45-54 year old Americans, differentiating by race, but failed to adjust for age. This matters for reasons that perhaps only epidemiologists appreciate, and that the field is only slowly coming to grips with: ageing is happening so fast in high-income countries that even within relatively narrow age categories the older ages can hold many more people than the younger ones, so even the small differences in mortality between, say, 53 year olds and 45 year olds can shift the mortality rate of the age category as a whole. If this seems shocking, consider the case of Japan, where ageing is so advanced that even five year age categories (the finest band of age that most statistical organizations will present publicly) are vulnerable to differences in the population. In Japan, the difference in size between the 84 year old population and the 80 year old population is so great that we may need to adjust for age even when looking at a category as narrow as 80-84 years. This problem is a new challenge for epidemiologists – we used to assume that if you reduce an analysis to a 10 or 15 year age category you don’t need to standardize, because the population within such a band is relatively stable, but this is no longer true.
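The mechanism is easy to demonstrate with a toy calculation (all numbers invented for illustration, not Case and Deaton’s data): hold every age-specific death rate fixed, shift the age mix of the 45-54 band older, and the crude rate for the band rises anyway.

```python
# Toy illustration (invented numbers): the crude death rate of a 45-54
# age band rises when the band's age mix shifts older, even though no
# individual age's death rate changes at all.

AGES = range(45, 55)

# Age-specific death rates, held fixed across both "years" (mortality
# rises roughly 5% per year of age in middle age).
rate = {a: 0.004 * 1.05 ** (a - 45) for a in AGES}

pop_even = {a: 100_000 for a in AGES}                      # even age mix
pop_older = {a: 100_000 + 8_000 * (a - 45) for a in AGES}  # older age mix

def crude_rate(pop):
    """Deaths over population for the whole band (the unadjusted rate)."""
    return sum(pop[a] * rate[a] for a in AGES) / sum(pop[a] for a in AGES)

def standardized_rate(rates, standard):
    """Direct age standardization: weight age-specific rates by a fixed
    standard population's age mix instead of the band's own."""
    return sum(standard[a] * rates[a] for a in AGES) / sum(standard.values())

print(crude_rate(pop_even))   # lower
print(crude_rate(pop_older))  # higher, despite identical age-specific rates
# Standardized to a fixed age mix, both "years" give exactly the same
# rate, because the age-specific rates never changed.
print(standardized_rate(rate, pop_even))
```

The apparent “mortality rise” here is pure age composition, which is exactly the artefact the age-adjusted reanalysis removes.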

In the case of the Case and Deaton study the effect of ageing in non-hispanic white populations is so great that failure to adjust for it completely biases their results. Andrew Gelman describes the problem on his blog and presents age-adjusted data and data for individual years of age, showing fairly convincingly that the entire driver of the “problem” identified by Case and Deaton is age, not ill health. After adjustment it does appear that some categories of white women are seeing an increasing mortality rate, but this is a) likely due to the recent growth of smoking in this population and b) not a likely explanation for Trump’s success, since he was more popular with men than women.

White people are dying more in America because they’re getting older, not because they have a problem. I happen to think that getting older is a problem, but it’s not a problem that Trump or anyone else can fix.

The myth of “deaths of despair” and Trump voting

Case and Deaton followed up their paper on white mortality with further research on “deaths of despair” – deaths due to opioid abuse, suicide and alcohol use that are supposedly due to “despair”. This paper is a better, more exhaustive analysis of the problem but it is vulnerable to a lot of basic epidemiological errors, and the overall theory is ignorant of basic principles in drug and alcohol theory and suicide research. This new research does not properly adjust for age in narrow age groups, and it does not take into account socioeconomic influences on mortality due to these conditions. But on this topic Case and Deaton are not the main offenders – they did not posit a link between “deaths of despair” and Trump voting, which was added by a researcher called Shannon Monnat at Pennsylvania State University in late 2016. In her paper, Monnat argues for a direct link between rates of “deaths of despair” and voting for Trump at the county level, suggesting that voting for Trump was somehow a response to the specific pressures affecting white Americans. There are huge flaws in this paper, which I list here, approximately in their order of importance.

  • It includes suicide: Obviously a county with high suicide mortality is in a horrible situation, which should be dealt with, but there is a big problem with using suicide as a predictor of Trump voting. This problem is guns. Uniquely among rich countries, the US has a very high prevalence of gun ownership and guns account for a much larger proportion of suicides in America than elsewhere – more than half, according to reputable studies. And unfortunately for rural Americans, the single biggest determinant of whether you commit suicide by gun is owning a gun – and gun ownership rates are much higher in counties that vote Republican. In America suicide is a proxy for gun ownership, not “despair”, and because gun-related suicide depends heavily on rates of gun ownership, inclusion of this mortality rate in the study heavily biases the total mortality rate being used towards a measure of gun ownership rather than despair.
  • It uses voting changes rather than voting odds: Like most studies of voting rates, Monnat compared the percentage voting for Trump with the percentage voting for Romney in 2012. This is a big flaw, because percentages do not vary evenly across their range. In Monnat’s study a county that increased its Republican voting proportion from 1% to 2% is treated exactly the same as a county that went from 50% to 51%. In one of these counties the vote doubled and Trump didn’t get elected; in the other it increased by just one percentage point but Trump won the county. It’s important to account for this non-linearity in analysis, but Monnat did not. Which leads to another problem …
  • It did not measure Trump’s success directly: In a first past the post electoral system, who wins is more important than by how much. Monnat used an ordinary least squares model of proportions voting Trump rather than a binomial model of Trump winning or losing, which means that meaningless small gains in safely “blue” states[1] had the same importance as small gains in marginal states that flipped them “red”. This might not be important except that we know Trump lost the popular vote (which differences in proportions measure) but won the electoral college (which more closely resembles binary measures of win/lose). Not analyzing binary outcomes in a binomial model suggests you don’t understand the relationship between statistics and the political system you live in, i.e. your analysis is wrong.
  • It did not incorporate turnout: A 52% win for Trump can reflect two things – a change in attitude by 2% of the voters, or a non-proportionate increase in the number of people who chose to turn out and vote. If you analyze proportions (or differences in proportions) you don’t account for this problem. In addition, you don’t adjust for the overall size of the electorate. If you analyze proportions, an electorate where 52 people voted Trump and 48 people voted Clinton is given the same weight as an electorate where 5,200 people voted Trump and 4,800 people voted Clinton. If you use a proper binomial model, however, the latter electorate gets more weight and is implicitly treated as more meaningful in the assessment of results. A reminder of what is fast becoming a faustusnotes rule: the cool kids do not use ordinary least squares regression to analyze probabilities, we always use logistic regression.
  • It did not present the regression results: Although Monnat reports regression results in a footnote, the main results in the text are all unadjusted, even though in at least some states the impact of economic factors appears to eliminate the relationship with mortality rates. Given that people who own guns are much much more likely to vote Republican, and the main predictor variable here incorporated suicide, adjustment for gun ownership might have eliminated the effect of “deaths of despair” entirely. But it wasn’t done as far as I can tell, and wasn’t shown.
  • It did not adjust for trends: Monnat openly states in the beginning of the paper that “deaths of despair” have been rising over time but when she conducts the analysis she uses the average rate for the period 2006-2014. This means that she does not consider the possibility that mortality has been dropping in some counties and rising in others. A mortality rate of 100 per 100,000 could reflect a decline over the period 2006-2014 from 150 to 50 (a huge decrease) or an increase from 25 to 175. We don’t know, but it seems likely that if “deaths of despair” is an issue, it will have had more influence on electoral decisions in 2016 in counties where the rate has risen over that time than where it has declined. There are lots of policy reasons why the death rate might have increased or decreased, but whether these reflect issues relevant to Republican or Democrat policy is impossible to know without seeing the distribution of trends – which Monnat did not analyze[2].

So in summary the study that found this “relationship” between “deaths of despair” and voting Trump was deeply flawed. There is no such relationship in the data[3].

There is no such thing as a “death of despair”

This study has got a fair bit of attention on the internet, as have the prior Case and Deaton studies. For example here we see a Medium report on the “Oxy electorate” that repeats all these sour talking points, and in this blog post some dude who fancies himself a spokesperson for ordinary America talks up the same issue. The latter blog post has some comments by people taking oxycontin for pain relief, who make some important points that the “deaths of despair” crew have overlooked. To quote one commenter[4]:

I too am a long time chronic pain sufferer and until I was put on opiate medications my quality of life was ZERO. I’ve heard horror stories of people actually being suicidal because they can no longer deal with the constant pain. It took me two years before I realized I could no longer work as an account manager with a major telecom company. I was making decent money but leaving work everyday in pain. I finally started going to a pain management doctor who diagnosed me with degenerative disc disease. I had to go on medical leave and now am on SSDI. My doctor prescribed me opiates in the fall of 2006 and I’ve been on them ever since. I have to say, I totally AGREE with you. I don’t know how I would be able to manage without these medications. At least I’m able to clean my house now and now without being in horrible pain. I don’t know what I would do if suddenly I was told I could no longer be prescribed opiates.
Who is someone that will champion those of us who legitametly need these medications? Do we write to our senators?? I sure hope Trump takes into consideration our cases before kicking us all to the curb!

This person (and others) make the valid point that they are taking pain medication for a reason, and that they were in despair before they got hooked on opioids, not after. Unfortunately for these commenters, we now have fairly good evidence that opioids are not the best treatment for chronic pain and that they are very, very dangerous, but regardless of whether this treatment is exactly the best one for these patients they make the valid point that it is the treatment they got and it works for them. To use an Americanism, you can take the opioids from their cold dead hands. In stark contrast to other countries, a very large proportion of America’s opioid deaths are due to prescription drugs, not heroin, reflecting an epidemic of overdose due to legally accessible painkillers. It’s my suspicion that these painkillers were prescribed to people like the above commenter because they could not afford the treatment for the underlying cause of their pain, because America’s healthcare system sucks, and these people then became addicted to a very dangerous substance – but in the absence of proper health insurance these people cannot get the specialist opioid management they deserve. America’s opioid epidemic is a consequence of poor health system access, not “despair”, and if Americans had the same health system as, say, Frenchies or Britons they would not be taking these drugs for more than 6 months, because the underlying cause of their condition would have been treated – and for that small minority of pain patients with chronic pain, in any other rich country they would have regular affordable access to a specialist who could calibrate their dose and manage their risks.

The opioid death problem in America is a problem of access to healthcare, which should have been fixed by Obamacare. Which brings us to the last issue …

There is no theory linking opioid addiction to voting Trump

What exactly is the theory by which people hooked on oxycontin are more likely to vote Trump? On its face there are only two realistic explanations for this theory: 1) the areas where oxycontin is a huge problem are facing social devastation with no solution in sight, so vote for change (even Trump!) in hopes of a solution; or 2) people who use drugs are arseholes and losers. Putting aside the obvious ecological fallacy in Monnat’s study (it could be that everyone in the area who votes for Trump is a non-opiate user, and they voted Trump in hopes of getting the druggies killed Duterte-style, but the data doesn’t tell us who voted Trump, just what proportion of each area did), there are big problems with these two explanations even at the individual level. Let’s deal with each in turn.

If areas facing social devastation due to oxycontin are more likely to vote Trump, why didn’t they also vote Romney? Some of these areas were stronger Obama voters in 2012, according to Monnat’s data, but opioid use has been skyrocketing in these areas since 2006 (remember Monnat used averages from 2006-2014). The mortality data covers two election cycles where they voted Obama even though opioid deaths were rising, and suddenly they voted Trump? Why now? Why Trump and not Romney, or McCain? It’s as if there is something else about Trump …

Of course it’s possible that oxycontin users are racist arseholes – I have certainly seen this in my time working in clinics providing healthcare to injecting drug users – but even if we accept such a bleak view of drug users (and it’s not true!), the problem with this theory is that even as opioid use increases, opioid users remain a tiny proportion of the total population of these areas. The opioid users themselves cannot directly swing the election – it has to be their neighbours, friends and family. Now, it’s possible that a high prevalence of opioid use and suicide drives the people witnessing this phenomenon to vote Trump, but this is a strange outcome: in general people vote for Democrats/Labour in times of social catastrophe, which is why they voted Obama to start with – because he promised to fix the financial crisis and health care. There has to be some other explanation for why non-opioid-using people switched their votes in droves to Trump but not Romney. I wonder what it could be?

American liberals’ desperate desire to believe their country is not deeply racist

The problem is, of course, that Trump had a single distinguishing feature that no one before him in the GOP had – he was uniquely, floridly racist. Since the election this has become abundantly clear, but for Monnat, writing in late 2016, I guess it still seemed plausibly deniable. Lots of people in America want to believe that the country they live in – the country that just 150 years ago went to war over slavery, and just 50 years ago had explicit laws to drive black people out of the economic life of the nation – is not racist. I have even recently seen news reports that America is “losing its leadership in the movement for racial equality.” No, dudes, you never showed any leadership on that front. America is a deeply racist nation. It’s racist in a way that other countries can’t even begin to understand. The reason Trump won is that he energized a racist base, and the reason his approval remains above 30% despite the shitshow he is presiding over is that a large number of Americans are out-and-out fascists, for whom trolling “liberals” and crushing non-whites is a good thing. That’s why rural, gun-owning Americans voted for Trump, and if the data were analyzed properly that fact would be very clear. Lots of people in America want to believe in second- or third-order causes like the rustbelt or opioids, but the reality is staring them in the face: it’s racism. Don’t blame people with chronic pain; blame people with chronic racism. And fix it, before the entire world has to pay for the vainglorious passions of a narrow swathe of white America.


fn1: I refuse to take the American use of “blue” and “red” seriously – they get scare quotes until they decide that Republicans are blue and Democrats are red. Sorry, but you guys need to sort your shit out. Get proper political colours and get rid of American Football, then you’ll be taken seriously on the world stage. Also learn to spell color with a “u”.

fn2: I’m joshing you here. Everyone knows that Republicans don’t give a flying fuck if an electorate is dying of opioid overdoses at a skyrocketing rate, and everyone knows that the idea that Republicans would offer people dying of “deaths of despair” any policy solutions to their problem except “be born rich” is a hilarious joke. The only possible policy intervention that could have helped counties seeing an increasing opioid death rate was Obamacare’s Medicaid expansion, and we know Republicans rejected that in the states they controlled because they’re evil.

fn3: Well, there might be, but no one has shown it with a robust method.

fn4: I’m such a cynic about everything American that I really hope this commenter isn’t a drug company plant…
