About a month ago, 60 Minutes ran a segment investigating the issue of so-called “mechanical doping” in cycling. The (admittedly silly) term refers to mechanical cheating, specifically, the surreptitious placement of motors in bicycles used in professional racing. I’ll get into the specifics in a moment, but will mention as a preface that the gist of the investigation is that hidden motors have allegedly been used in the highest levels of cycling.
If comment-section discussions are an indicator, it appears as though many people are convinced by these suggestions. The internet is rife with those who are not merely wary that motors may have been used in World Tour races, but are absolutely certain of it.
To me, this seems a rather unreasonable belief. In general I would be pretty reluctant to engage in any sort of “belief debunking” exercise, mainly because such exercises are typically problematic. Almost weekly you will come across an article in the form of “Why don’t people believe in science (e.g. vaccines, modern medicine in general, climate change, the moon landing)?” And the answer invariably comes, “Because they’re uneducated and irrational.” Such pieces tend to confuse philosophy with psychology and sociology, and usually do a pretty bad job at both. Certainly many people lack the requisite expertise to make sense of all sorts of complex theories, and they’re probably also irrational in many ways.
But the main mistake these debunkers make is assuming that people who “believe in science” do so because they are the opposite of those who don’t believe – they are educated and rational. Psychologically and sociologically speaking, most beliefs, whether deemed true or false, are not held because people have followed a set of rational procedures resembling something like the “scientific method.” Most people form their beliefs without much evidential support or critical analysis. The majority of people who think that it’s a good idea to get their kids vaccinated have never read a single scientific study on vaccines; they either trust their doctors and other experts, or are compelled by the fact that it is a widely held, conventional belief. They might bolster their belief with a few confirmation-biased internet searches. It also happens that it actually is a good idea to get your kids vaccinated, but most people don’t come anywhere close to empirically verifying this.
I’d argue that trusting experts is in itself a rational thing to do, but that’s a different kind of rationality than coming to a conclusion based on rigorous research of the available evidence and theoretical explanations for some phenomenon. It’s certainly more complex than applying a straightforward set of well-defined rational rules. And in appealing to these idealised rules, naturalistic explanations and normative evaluations of beliefs get confused and conflated. If the question is, “Why don’t people believe in X?” (or here, why do people believe in mechanical doping?) the best answer will be naturalistic – a psychological and sociological explanation. If the question is, “Why should people believe in X?” or “What’s the best way to form beliefs about X?” then the answer will be normative. The answers to the normative questions might be construed as an intervention to produce a different result to the naturalistic one – that is, if people were more rational – however rationality comes to be defined (and that’s a big however) – then they would believe in X more. This might be the case, but the size of the effect depends on the question at hand. Often, it can be extremely limited.
Did I need so many caveats? As usual, probably not. While I’m interested in both of these questions, I wanted to be clear that I am not conflating them. Moreover, in turning to the normative issue – how might a reasonable belief about mechanical doping be achieved? – I don’t want to pretend that some easy application of rational rules is the solution. There are legitimately unsettled philosophical questions about optimal forms of reasoning. And sociologically, even if these questions were settled, there still might be “good reasons” for disagreement based on contextual factors – especially regarding questions of trust.
This is not to say that people aren’t irrational. One can easily point to problematic instances of rationalisation where people engage in all kinds of logical fallacies, or confirmation bias and cherry-picking of data, or motivated reasoning, or fail to recognise issues with available evidence, or arrive at grossly over-extended conclusions. And these things probably go a long way in accounting for why people hold beliefs for which there is limited empirical support. But I’m not especially interested in these issues here. Rather, I want to address two underlying aspects of beliefs in general that account for differing interpretations of evidence: 1) beliefs are not statements of certainty, and 2) beliefs and evidence relate in more complex ways than most common conceptions of rationality suppose.
Since whether or not mechanical doping occurs in top-level bike racing (I’ll define this specifically as World Tour races) is ostensibly an empirical question, I guess I should start with outlining the available evidence. It amounts to:
- A furtive Hungarian inventor of a working hidden motor, about whom we know very little and whose story contains many gaps, claiming that he believes that it has been used in World Tour races
- The same inventor claiming that both motorised hubs and sophisticated magnetic wheels capable of propelling themselves exist in a form that could be clandestinely used in races
- A paranoid Greg LeMond, claiming the same thing, based almost entirely on the claims of said suspicious Hungarian inventor
- An ex-French Anti-Doping Agency official claiming that he believes that it has happened based on the unsubstantiated testimony of unidentified “informants”
- Ex-Pro Tyler Hamilton saying he “could see how [it could happen]” based on using the motor provided by the Hungarian inventor
- A YouTube video showing Fabian Cancellara apparently attacking “the pack at unnatural speeds”
- A YouTube video showing Ryder Hesjedal’s wheel spinning around “apparently on its own”
- Team Sky time trial bikes at the Tour de France that allegedly weighed 800g more than other bikes (which is exactly the same amount of extra weight that a motorised hub would add to a regular wheel, as alleged by the Hungarian inventor)
- A female cyclocross racer in a UCI World Cup race who was caught with a motorised bike (that, incidentally, was not used in the race)
- A report from an Italian news outlet that claims it has evidence, obtained from heat-sensing devices, that motors were used in Strade Bianche and the Coppi e Bartali
- A 60 Minutes episode that cites all of the above
As it stands, I’m unconvinced by this. The strongest bit of circumstantial evidence is the fact that an actual, working, motorised bike was found at a women’s UCI World Cup race. But everything else is not only hearsay and circumstantial evidence, but it is hearsay and circumstantial evidence of remarkably poor quality.
The Hungarian inventor’s accusations are totally unsubstantiated. That he has a working motor doesn’t add anything to the fact that we already know motors are capable of being hidden in bikes. Even the crux of his story – that he sold the rights to his motor for a period of 10 years to an unknown buyer through a middleman in 1998 for $2 million – lacks any convincing physical evidence. He provides an unverified bank statement showing that he had approximately this amount of money at one point. But what does that show? The money could have been obtained in any number of ways.
And for $2 million, why would the terms of the agreement be for 10 years and not indefinitely? The terms of this deal swore him to both a strict confidentiality agreement and a complete cessation of any other work on motors. And all he has as proof of this transaction is a bank record? There were no lawyers or contracts involved in this agreement? And where is this friend who facilitated the transaction?
The news reports about motors in Strade Bianche, which should be damning, contain no real evidence. No riders, no bikes, no clear photographic or video evidence. Even the French anti-doping official’s claims – which appear to be those of a legitimate expert – are not merely hearsay, but hearsay of hearsay.
I don’t want to go through every single piece of alleged evidence and explain why I think it’s unconvincing. Instead, I’ll answer the following question: What would it take for me to be convinced that mechanical doping definitely occurred in World Tour races? To start, direct evidence. This would entail producing an actual bike with a motor in it and clear indication that it was used in a race. An ideal scenario would be a bike involved in a crash in which the motor is exposed and captured by multiple recording devices and seen first hand by multiple witnesses. But if the UCI found a motor in one of its pre- and post-race motor checks, and published a clear record of the context in which it was detected and by what means, this would also suffice. I would be virtually certain.
I add the qualifier “virtually” to point out that in the second case, there is still room – though possibly negligible – for scepticism. The UCI could lie for some nefarious reason. To what end would require massive speculation. It should be noted that even in the first case there is still room for scepticism. The entire scenario could have been staged – the video evidence faked and the eye-witness accounts fabricated. So the real ideal scenario would be me personally inspecting a bike and finding a motor in it. But even then, I could merely be in an elaborate simulation.
I mention these extreme forms of scepticism partly facetiously, but there’s both a philosophical and practical point here: beliefs are not matters of certainty (even the ones that seem to be certain!), but likelihood.
If you’ve never heard of Bayesian beliefs before, here’s the place where I tell you about them. The name refers to Thomas Bayes, an English statistician who developed a theorem that expresses the probability of some phenomenon existing or occurring as a function of available evidence. Via this theorem, beliefs are seen as matters of degree, which should change in relation to the accumulation of evidence.
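To make those degrees concrete, here is a minimal sketch of a single Bayesian update in Python. All numbers are purely illustrative assumptions of mine, not estimates about actual races:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem:
    P(H|E) = P(E|H)·P(H) / [P(E|H)·P(H) + P(E|¬H)·P(¬H)]
    """
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Illustrative numbers only: start from a 10% prior that motors
# are used, then observe a weak piece of evidence (say, an
# inventor's claim) that is only slightly more likely to appear
# if the hypothesis is true (60%) than if it is false (40%).
posterior = bayes_update(prior=0.10, p_e_given_h=0.6, p_e_given_not_h=0.4)
print(round(posterior, 3))  # weak evidence nudges 0.10 up to ~0.143
```

The point of the sketch is simply that belief is a number that moves with evidence: weak evidence nudges it a little, strong evidence moves it a lot, and nothing short of impossible evidence pins it to 0 or 1.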
The key point here is that beliefs are not bivalent. The two options in the case of mechanical doping are not: it is absolutely certain that motors have been used in professional bike racing vs. motors have definitely never been used in professional bike racing. Rather, there is a range of likelihoods, and each degree can be supported by various interpretations of the evidence. In this case, the extreme ends seem rather untenable.
More recent work on Bayesian probability recognises that there is more to the formulation of probabilistic beliefs than the availability of evidence, as people will interpret available evidence differently. Moreover, there is a lot of research in psychology that shows that people often do not alter their beliefs when presented with new, conflicting evidence.
And if you’ve never heard of “confirmation holism” before, here’s the place where I tell you about that. This term comes from a philosopher with the illustrious name of Willard Van Orman Quine, who argued that beliefs are not the kinds of things that stand alone and can be justified by applying a clear set of logical procedures – either by lining up corresponding empirical evidence by which to verify a belief, or by identifying an observation that shows it to be false. Rather, individual beliefs are parts of complex webs and depend on a multitude of assumptions and other beliefs, in conjunction with empirical evidence.
The reason people don’t change their minds when they receive new evidence is because they can make adjustments to the web that accommodate this information in such a way that their overall belief remains unchanged.
Here’s an example: Let’s say you were a scientist conducting an experiment on, I don’t know, the ergogenic effects of EPO, and you ran a series of tests on athletes that showed no performance enhancing effect of EPO supplementation. You could either believe that all previous studies had made errors, or that there was something wrong with your results. The latter seems more likely, so you would adjust some part of your theoretical framework – maybe by dropping your previous assumption that your methods of data collection were sound. For example, the conflicting results could be explained if there was a systematic problem with the process by which you measure hematocrit.
In other cases, it might be that a person’s web of belief is constituted in such a way that they don’t even need to make an adjustment; it is already primed to reject recalcitrant information. For example, let’s say you were a Donald Trump supporter and clear evidence emerged that showed that Donald Trump had been colluding with Putin during his campaign, and that the Russians intervened directly in the election. My qualifying “clear” evidence probably would not be recognised as such by Trump supporters. Part of their conceptual framework probably includes the assumption that the media regularly lies and that journalists are not trustworthy sources. So new, conflicting evidence is easily subsumed by their web of beliefs.
In the case of mechanical doping, the many people who seem absolutely certain of it might have some part of their web of belief that makes this Hungarian inventor seem trustworthy, while I find him suspect. Or maybe they are primed to believe that all forms of cheating are more likely than I think they are.
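That divergence can be put in Bayesian terms too: people who weigh the same evidence the same way still end up far apart if their webs of belief hand them different priors. A small sketch, again with purely illustrative numbers of my own:

```python
def posterior_from_odds(prior, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds × LR."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Suppose some piece of evidence (e.g. a suspicious video) is judged
# three times more likely to appear if motors are used than if not.
# A sceptic and an agnostic see the *same* evidence but diverge:
for prior in (0.05, 0.50):  # sceptical vs. agnostic starting points
    print(prior, "->", round(posterior_from_odds(prior, 3.0), 2))
```

On these numbers the sceptic ends up around 0.14 while the agnostic ends up at 0.75 – identical evidence, very different conclusions, with no irrationality required in the update itself.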
Is it possible that motors have been used in World Tour races? Yes. Thus, the chances that it has occurred are significantly higher than zero, since we have direct evidence that it is possible. Working motors exist and can be surreptitiously placed in bikes. But what is a reasonable belief about how likely this is? For what it’s worth, I think it’s less-than-likely that it has happened in World Tour races; if I had to quantify it, I’d say there’s a 30% chance. What this means is that I wouldn’t find it egregious if someone thought that the chances were roughly even, or even slightly more-than-likely. But very likely? Certain? That seems excessive.
What these qualifiers indicate is that beliefs do not necessarily indicate what people think they do. They are not statements of certainty. Indeed, for all intents and purposes, they might approach the limit of certainty, but they can never reach it.
So this is the main normative point I want to make: Debates might be less acrimonious and polarising – even stupid debates about bike racing – if this basic premise were recognised. People treat most disagreements as dichotomies – between truth and falsity, right and wrong – when they are really about relative likeliness. People say “indisputable” when, if pressed, they might concede they really mean, “more than likely.” Disagreements are often marginal, rather than starkly divisive.
There remain open questions here about the limits of scepticism. When do you cross the threshold from saying “I don’t believe” to “I believe”? When the probability is higher than 50%? 75%? 99%? Or is this just a semantic problem? I could say, “I believe there is a 30% chance that motors have been used in World Tour races,” or I could say, “I’m not sure that motors have been used in World Tour races,” and mean similar things. But what about, “I don’t believe that motors have been used in World Tour races”? What does that mean? Is that the kind of statement that should be avoided? I don’t believe I have an answer to that.
One more caveat to end things off. Is there a less trivial issue than mechanical doping to which I could apply these reflections? Yes.