Transcript of video
Closer to the conclusion of our most interesting discussion, Professor Evans: another area of your expertise is the discovery of scientific fraud and misconduct. How can one actually detect scientific fraud and misconduct in clinical trials, or in post-marketing drug safety analysis?

Well, I think you first have to have a mindset that allows for the possibility. At the moment, in many clinical trials, particularly those monitored by the FDA or other regulatory authorities, there is careful monitoring of what goes on in those trials. Monitoring by visiting the sites where the data are collected is not the most effective way of doing it, though; statistical analysis is usually used to determine where on-site monitoring should be carried out. So that, I think, can be improved. You need a mindset, you need analysis, and you need to know what to look for in the data. There are patterns when people invent data that do not occur in real data. In some senses, I wouldn't really want to go through all the tricks of detecting fraud. Someone said to me that I should be very careful in explaining what I do to detect fraud, because otherwise people will find ways to get around it. Well, I'm not sure that I agree with that. I think it's my job to invent new statistical methods to detect fraud and misconduct in trials. It's actually easier to detect fraud in trials than it is in observational studies or in post-marketing drug safety analysis. But a lot of the post-marketing studies are done in electronic health records that are used for clinical purposes. It will then rarely be the data themselves that are fraudulent, because doctors, and the other health professionals recording the data, don't on the whole write down fraudulent data for their patients. But it is the analysis of the data that might be deficient.
And, from my experience, we do not see as much fraud in post-marketing safety analysis as we see in academic trials, where the result of the trial gives glory to the investigator. You need to be aware of people's motives when they commit fraud. Many doctors participate in randomized trials that are funded by industry, and they like the money that comes from that. So they may be tempted, and sometimes succumb to the temptation, to take shortcuts, or to invent data in order to be paid for that data in a trial. And I think we have pretty good ways of detecting when that occurs. We have less good ways of detecting when observational studies are done badly, but there are possibilities for looking at that as well.

Well, one of the fascinating papers that you published, and I think it's an open secret since it has been published, compared a trial of a certain nutritional intervention for cardiovascular disease with a medical intervention, and showed that analysis of the last digits in the data could reveal, through their non-random distribution, whether some scientific misconduct was happening in the analysis. Could you please briefly discuss that approach, as an illustration of one of the many methods of your analysis that can uncover such situations?

If I were to ask everyone in your audience to think of a single number between zero and nine, to write it down now, and I were then able to go and look at those results, I would not find an even distribution of the numbers between zero and nine. There would be, for example, very few zeros and relatively few nines, and rather more sevens. As soon as human beings start to invent numbers, they cannot invent them randomly unless they use a computer to do so. And if they use a computer to do so, then there are ways of detecting that.
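The uneven-digit-distribution idea described above can be sketched as a simple chi-square goodness-of-fit test against a uniform distribution over the digits 0-9. This is a minimal illustration, not the method from the paper under discussion, and the `human_picks` data below are invented for the example.

```python
from collections import Counter

def digit_uniformity_chi2(digits):
    """Chi-square goodness-of-fit statistic for digits 0-9 against a
    uniform distribution. Large values suggest human digit preference."""
    n = len(digits)
    expected = n / 10.0  # each digit should appear n/10 times if uniform
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

# Critical value for 9 degrees of freedom at the 5% level.
CHI2_9DF_05 = 16.92

# Hypothetical digits picked "at random" by people: 7 is over-represented,
# 0 and 9 hardly appear, as the speaker describes.
human_picks = [7, 3, 7, 8, 7, 4, 3, 7, 5, 6, 7, 3, 8, 4, 7, 2, 3, 6, 7, 5]
stat = digit_uniformity_chi2(human_picks)
print(f"chi-square = {stat:.2f}, suspicious = {stat > CHI2_9DF_05}")
```

With the invented sample above the statistic comfortably exceeds the 5% critical value, flagging the distribution as unlikely to be random; real applications would of course need far larger samples.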
So when we end up with anything that is subjective, and it used to be particularly the case with blood pressures, or with heights and weights, where somebody wrote down a number after examining a patient, then you would find digit preference. And that wasn't necessarily fraudulent. But if you have to invent all your numbers for a randomized trial and write them down, the patterns that human beings have in writing those numbers down enable you to detect differences from what is likely to be real data. And so in the example, we had a real-data trial and data that were very clearly invented, and we could detect the difference between them, because the human beings involved in inventing the data couldn't reproduce what is seen in the real world.
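The comparison of real against invented data described above can be sketched as a two-sample test on terminal (last) digits: build a 2x10 contingency table of last-digit counts and compute the chi-square statistic. This is an illustrative sketch only; the blood-pressure readings below are hypothetical, and real recorded pressures often show heavy rounding to 0 and 5.

```python
from collections import Counter

def terminal_digits(values):
    """Last digit of each integer-recorded measurement (e.g. mmHg)."""
    return [abs(int(v)) % 10 for v in values]

def two_sample_digit_chi2(sample_a, sample_b):
    """Chi-square statistic comparing the terminal-digit distributions of
    two samples (2 x 10 contingency table, 9 degrees of freedom)."""
    a = Counter(terminal_digits(sample_a))
    b = Counter(terminal_digits(sample_b))
    na, nb = len(sample_a), len(sample_b)
    n = na + nb
    stat = 0.0
    for d in range(10):
        col = a.get(d, 0) + b.get(d, 0)  # total count for this digit
        for obs, row_total in ((a.get(d, 0), na), (b.get(d, 0), nb)):
            exp = row_total * col / n
            if exp > 0:
                stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical data: genuinely recorded pressures rounded to 0 or 5,
# versus invented values with quite different terminal digits.
recorded = [120, 135, 140, 125, 130, 145, 150, 120, 135, 140]
suspect = [123, 137, 141, 126, 133, 147, 152, 124, 138, 143]
print(f"chi-square = {two_sample_digit_chi2(recorded, suspect):.2f}")
```

A large statistic (against 9 degrees of freedom) indicates the two sets of last digits were very unlikely to have come from the same process; identical samples give a statistic of zero.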