Transcript: Episode 2

Nathan Green  0:30  
Hello, you're listening to the Sample Space podcast, brought to you by the Department of Statistical Science here at University College London. My name is Dr. Nathan Green from the department, and I'm very pleased today to be joined by Dr. Anna Heath, who will be talking about her work on value of information. Hi, Anna.

Anna Heath  0:54  
Hi, thanks for having me.

Nathan Green  0:56  
Can you briefly introduce yourself, please?

Anna Heath  1:00  
Yes, of course. My name is Dr. Anna Heath. I'm a scientist at the Hospital for Sick Children in Toronto, and an assistant professor in the Division of Biostatistics at the University of Toronto. My research primarily focuses on novel statistical methodology for trial design, particularly clinical trials. And I work a lot at the intersection of health economic decision making and the design and analysis of clinical trials.

Nathan Green  1:28  
So what is value of information?

Anna Heath  1:31  
Value of information is a method for using health economic decision making, and in particular decision theoretic ideas, and applying them to the design of research and the prioritisation of different research strategies. Decision theory is a branch of statistics focused on determining the best method, or putting together a framework, for making decisions, and in particular for making the optimal decision under uncertainty. Typically it sits within Bayesian statistical methods: you average over the uncertainty in the parameter estimates, with that uncertainty characterised by probability distributions, and you combine that with some utility function, a calculation that helps us compare the value of the different decision options you might have in your decision problem, to come up with the optimal decision given that level of uncertainty.
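
As a toy illustration of that decision-theoretic recipe, here is a minimal Python sketch; all the numbers and option names are invented for illustration. Uncertainty about a parameter is described by a probability distribution, each option gets a utility, and the rational choice is the option with the highest expected utility, averaged over the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncertain parameter: probability the new treatment works, with a
# Beta distribution summarising current evidence (numbers made up).
p_works = rng.beta(8, 4, size=50_000)

# Utility of each option as a function of the uncertain parameter.
# Option A: new treatment, costly but valuable when it works.
# Option B: standard care, cheap and predictable.
utility_a = 10.0 * p_works - 4.0
utility_b = np.full_like(p_works, 2.0)

# The rational (Bayesian) decision maximises *expected* utility,
# averaging over the uncertainty rather than plugging in one value.
expected = {"A": utility_a.mean(), "B": utility_b.mean()}
best = max(expected, key=expected.get)
print(expected, "-> choose", best)
```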

Nathan Green  2:33  
Right. I think what we're saying is it's a way to come up with some sort of rational, sensible decision, to justify the decisions that we want to make.

Anna Heath  2:44  
Exactly, it's the theory of rational decision making under uncertainty.

Nathan Green  2:49  
Yeah. Okay. Cool. I mean, I think you touched on it, but why would we use that? You mentioned uncertainty a lot.

Anna Heath  2:56  
Yeah. So in particular, in health economics, we're often trying to make decisions between different healthcare interventions, in terms of which one would be best to implement in the wider healthcare system. And when we want to make those decisions, we often need to take into account a lot of different considerations. So not just the clinical effectiveness of the treatment, but the costs, the long term consequences, the quality of life given by those treatments. And often those inputs are known with different levels of certainty. So, for example, you might have relatively good data on the clinical effectiveness from a previous clinical trial, but the long term consequences, the lifelong implications of those treatments, we would have much more uncertainty about. So what we try to do is characterise that uncertainty in some way and use these decision models to understand which is the optimal treatment based on what we currently know about the different available treatment options in that disease area.

Nathan Green  4:01  
Whose optimal decision is it? So who decides what the utilities are? Who decides what's good for people, or at what level those decisions are made?

Anna Heath  4:15  
Typically, in the current use of these methods, we consider a population level decision maker, a government level policymaker, essentially. So in the UK, the National Institute for Health and Care Excellence has a framework that takes into account this public health level decision. But theoretically, the decision maker could be anyone, so you could do an individual level decision model which would take personal preferences into account. The complication of that is probably a little bit too high, though, which is why these decisions are typically made at the population or policy level.

Nathan Green  4:56  
Yeah, I suppose that's where the budgeting decisions are made. Yeah. Okay, cool. So, I mean, I'm convinced that sounds useful, and it sounds like some cool maths, but how would you actually go about calculating it?

Anna Heath  5:12  
When we move into the typical value of information framework, we tend to focus on three key measures, which are known by acronyms that do stand for words but are not very descriptive, and the calculation tends to get more complicated as the measures become more relevant to research design. The first measure that we usually calculate is the EVPI, the expected value of perfect information, which is the value of resolving all the uncertainty in your model: essentially learning the exact optimal treatment, because you have no further statistical uncertainty in your model. That calculation is relatively easy. We do have to do it by simulation, but it's the typical simulation you would do anyway to understand the impact of uncertainty in a decision model: you simulate values of the parameters from all your parameter distributions, and then feed them through the model, in a process known as probabilistic analysis. That probabilistic analysis characterises the distribution of your utility function for each of your treatment options. Once you have those distributions of the utilities, you can calculate the EVPI very easily, by taking what I call the row-wise maximums. So if you line up your vectors of simulated utilities as a matrix, you calculate the maximum utility in each row, and then the average of those maximums; from that you subtract the maximum average utility, so you take the average utility down each column and then the maximum of those averages. So once you've done a standard probabilistic analysis, you can calculate the EVPI very easily.
The next measure is the EVPPI, the expected value of partial perfect information, which is the value of learning the exact value of a specific model parameter, or a specific subset of the model parameters. That can be used to target research towards a specific outcome of interest. So maybe you discover that you need to learn much more about the prevalence of the disease in the population, and that the clinical effectiveness you've learned from the clinical trial is actually not so relevant to supporting the decision. So it's a research prioritisation framework. In the past it required very complex simulation methods, but relatively recently, in 2014 I believe, a novel method was developed that essentially fits a regression of the utility values on the parameters of interest, and the fitted values from that regression can be used to calculate the EVPPI. So rather than having to do a really complex simulation process, we just run a regression on the output we already have from the probabilistic analysis.
The final measure, the EVSI, the expected value of sample information, is the value of running a specific research study. So you could say, I'm going to run this clinical trial; what's the value of running that trial? There are actually several different methods for calculating it, and they each have different pros and cons depending on the complexity of the model and the complexity of the study you're designing, so how many outcomes you would have in your study. And they range in complexity and computational time.
And actually, this is one of the key areas of research we're working on right now: making those methods easier to implement, easier to use, and easier to choose between, so you can work out which one is most relevant to your problem.
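
To make those calculations concrete, here is a minimal Python sketch of the EVPI and EVPPI computations described above, using an invented two-treatment example: the parameter distributions, utility functions, and all numbers are made up, and a simple cubic polynomial stands in for the nonparametric regressions (such as GAMs or Gaussian processes) used in the published regression-based method.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Fake probabilistic analysis (illustrative only) ---
n_sim = 10_000
# One uncertain parameter of interest, e.g. disease prevalence.
prevalence = rng.beta(5, 20, size=n_sim)
# All other parameters lumped into residual noise on the utilities.
noise = rng.normal(0, 0.5, size=(n_sim, 2))

# Simulated utilities (net benefits): one row per parameter draw,
# one column per treatment.  Treatment 1 depends on prevalence.
utilities = np.column_stack([
    1.0 + 0.0 * prevalence,   # treatment 0: flat
    0.5 + 3.0 * prevalence,   # treatment 1: prevalence-sensitive
]) + noise

# --- EVPI: mean of row-wise maxima minus maximum of column means ---
evpi = utilities.max(axis=1).mean() - utilities.mean(axis=0).max()

# --- EVPPI for `prevalence` via regression fitted values ---
# Regress each treatment's utility on the parameter of interest and
# use the fitted values in place of the raw utilities.  (The 2014
# method uses nonparametric regression; a cubic fit stands in here.)
fitted = np.column_stack([
    np.polyval(np.polyfit(prevalence, utilities[:, d], deg=3), prevalence)
    for d in range(utilities.shape[1])
])
evppi = fitted.max(axis=1).mean() - fitted.mean(axis=0).max()

print(f"EVPI  = {evpi:.4f}")
print(f"EVPPI = {evppi:.4f}")  # in theory, never exceeds the EVPI
```

Because resolving uncertainty in one parameter can only remove part of the total uncertainty, the EVPPI should come out no larger than the EVPI.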

Nathan Green  9:01  
Right. So where are you at with that? I mean, who's using these methods?

Anna Heath  9:06  
So there's been, I mean, relatively a lot of uptake of the EVPI and the EVPPI. In particular, there's an online interface known as SAVI which can be used to calculate the EVPPI using this regression based method very easily. And I think a lot of publications now use that software and include EVPPI methods as part of the research prioritisation paragraph at the end of the paper, saying, oh, we should maybe focus research on x. With regards to the EVSI, it's definitely less commonly used, but it certainly is becoming something that people are looking to do. And personally, I have a couple of projects implementing the EVSI in practice, using these novel methods to try and demonstrate how they can be used in clinical trial design in particular.

Nathan Green  9:58  
Okay, so it's kind of at the start of its adoption. Do you think it's going to be used in contexts outside of medicine? And could it be used even before a research project, in order to justify its funding, for example, prior to the project?

Anna Heath  10:17  
I guess that's what I would like to start to see happening: some justification of the value of research before you apply for funding. Huge sums of funding are given over to medical research, and I think we should start to think carefully about whether we're extracting value for money from that research. That being said, the complication of doing this prior to the research project is that you have to build the decision model. The way it's typically implemented at the moment is that when companies are applying for authorisation for their product, they have to build the decision model. So it's only really after that full clinical trial process that they're building their models, and that's when you can say, oh, well, maybe we should have gathered more evidence on x, which is, I guess, useful as a sensitivity analysis, but not useful in a research design or prioritisation setting. So what we'd be hoping to move towards is something where that decision model building is actually done prior to the study, as part of the pilot work, then used to help design the study, and then reused again at the end of the study to make the decisions. You still have to build the model; it's just a question of where you build it in the product development timeline.

Nathan Green  11:35  
Maybe you'd have to incentivise people to do that, and give them some study money up front so they could commit the time.

Anna Heath  11:44  
Yeah, I guess away from this exact methodology, complex trial designs and study designs are actually becoming more common, particularly in an era where we had to have very fast moving product development during COVID, and also as a recognition that small populations for different diseases are really struggling to get research done. These kinds of challenges are leading to more complex designs, and because of that, I think there is more of an appetite to support trial design in the design phase, and to make sure that the design is relevant to the research question. Because I think if you don't fund that design phase properly, you actually do end up with the wrong studies being funded and a lot of money being wasted. And I think there's a lot more recognition that funding study design is actually a really key part of making sure that the research we do brings value for money, whether that's in a formal value of information setting, or just in general.

Nathan Green  12:49  
Makes perfect sense to me. When you were talking, I realised I was thinking of it as something you do before the project and again at the end. But I suppose there's no reason you couldn't do this sort of thing online, if you like: you could use it as a stopping rule, or as some sort of decision tool during the project, rather than just bookending it, right?

Anna Heath  13:13  
Yeah, definitely. I kind of see it as part of a cycle of evidence collection and curation, where you would do exactly that at different phases. In traditional clinical or drug development, where you have different phases of your trial, it's maybe not worth building that complexity at phase one, where you don't even know whether the drug has any signal for efficacy, or any signal for safety. But I think between phase two and phase three trials, there's a really good opportunity to build that kind of modelling, at that point using some of the phase two data, and making sure that you would get approval for your drug after the phase three trial if it's as effective as you hope. And I think that would also help reduce the waste in phase three trials, because they are just so intensive, and you really want to make sure you're focusing on informing the whole decision, rather than just the clinical effectiveness decision.

Nathan Green  14:15  
It sounds like a great idea. I suppose, in some of my experience, there's a gap between great ideas and their being taken up in practice. So what do you think you could do to give yourself the best chance of being taken up?

Anna Heath  14:29  
A lot of what we try to do, I guess, is implement these methods in practice. There's a tendency for methodology to be developed and for the same people not to take it forward into practice, so a lot of my projects are really trying to do that translational piece, where we do actually implement them in practice. I guess there's a lot of advocacy and education as well. So we have short courses available that we teach on, and I'm working with policymakers to work out where these methods could fit in, trying to be available, to make these things known about, to show how they're used, and to develop software and educational tools and tutorials to help other people implement them. Yeah, just trying to get the word out there. And I have another project working with the Canadian Agency for Drugs and Technologies in Health, trying to work out where these methods could be implemented in expedited drug approvals: where you have promising results and a drug is given approval before the full clinical trial evidence has been collected, maybe on the secondary objectives, and you try to use value of information as part of those formal appraisals, asking for the evidence to be collected after the market authorisation has been given. So also trying, I guess, to work within currently available funding mechanisms that ask for evidence and things like that. So working in the applied, the methodological, and the theoretical space to try and bridge that gap.

Nathan Green  16:05  
Okay, cool. That sounds quite positive, on which note I'd like to say thank you for joining us, and thanks for spending the time to talk about this today.

Anna Heath  16:16  
Thank you for having me. It's been really great.

Unknown Speaker  16:19  
UCL Minds brings together the knowledge, insights and ideas of our community through a wide range of events and activities that are open to everyone.