Sunday, July 16, 2017

Statistics Sunday: No Really, What's Bayes' Got to Do With It?

Note: This is the second of a two-part post. Check out part 1 here.

Yesterday, I discussed what Bayes' theorem can tell us about our actual Type I error rate. Today, I'll demonstrate how to apply Bayes' theorem to research findings.

As a reminder, Bayes' theorem looks like this:

P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|Ac)P(Ac)]
But since we want to use it to generate false positive rates, we need to rethink the equation just a bit. We want the probability that a result is not true given that it is significant - P(Tc|S). So we're essentially solving for Not A instead of A, which means that anywhere A appears in the original equation, we sub in Ac, and any Ac becomes A.
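Making those substitutions, the version we'll actually compute is:

P(Tc|S) = P(S|Tc)P(Tc) / [P(S|Tc)P(Tc) + P(S|T)P(T)]

where T means the hypothesis is true and S means the result is significant.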

After that, most of the values are pretty easy to determine. P(S|T) is equal to power, for which the convention is 0.8. Occasionally, people may power their study to 0.9. We'll try both but focus mainly on the 0.8 results.

P(S|Tc) is equal to alpha. Convention is 0.05, but you might also see 0.1 or 0.01. We'll test all three but focus mainly on the 0.05 results.

The unknowns are P(T) and P(Tc). These refer to the probability of whether a hypothesis is true or not, separate from research findings. We might be inclined to use 0.5, since there are two options, True or Not True, but it's likely smaller than that. It's more likely that a broad hypothesis is true in a small set of circumstances, and of course, there could be a better explanation (a better hypothesis) out there somewhere we just haven't found yet. When a value you need isn't known, the best thing to do is test a range of values to see how that affects the results - let's do increments of 0.05, from 0.05 to 0.95.

# P(T): probability a hypothesis is true, in increments of 0.05 from 0.05 to 0.95
ptrue <- seq(0.05, 0.95, by = 0.05)
# P(Tc): probability a hypothesis is not true
pnottrue <- 1 - ptrue

Cool. Now let's set up our P(S|T) and P(S|Tc) values:

# P(S|T): power
psig_given_true1 <- 0.80
psig_given_true2 <- 0.90
# P(S|Tc): alpha
psig_given_nottrue1 <- 0.05
psig_given_nottrue2 <- 0.01
psig_given_nottrue3 <- 0.10

So let's run Bayes' theorem and find out what our false positive rates are at the conventional alpha value of 0.05, and powers of 0.80 and 0.90.

# P(Tc|S) at alpha = 0.05 and power = 0.80
bayes11 <- (psig_given_nottrue1 * pnottrue) /
  ((psig_given_nottrue1 * pnottrue) + (psig_given_true1 * ptrue))
# P(Tc|S) at alpha = 0.05 and power = 0.90
bayes12 <- (psig_given_nottrue1 * pnottrue) /
  ((psig_given_nottrue1 * pnottrue) + (psig_given_true2 * ptrue))
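
To get a feel for the numbers before plotting, here's a quick spot check at P(T) = 0.05 with power of 0.80 and alpha of 0.05:

(0.05 * 0.95) / ((0.05 * 0.95) + (0.80 * 0.05))  # works out to about 0.54

So even at the conventional alpha, more than half of significant results would be false positives when only 5% of tested hypotheses are true.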

Let's plot those results:

# labels=FALSE suppresses the default tick labels; custom axes are added below
plot(bayes11, type="o", col="red", xlab="P(T)",
     ylab="False Positive Rate",
     main="False Positive Rates for Alpha of 0.05",
     labels=FALSE, ylim=c(0,1))
lines(bayes12, type="o", col="blue")
axis(1, at=1:19, labels=ptrue)
axis(2, at=c(0, .1, .2, .3, .4, .5, .6, .7, .8, .9, 1.0))
legend(12, .8, c("Power=0.9", "Power=0.8"), lty=c(1,1),
       lwd=c(2.5,2.5), col=c("blue","red"))


For conventional power of 0.80 and conventional alpha of 0.05, you can see false positive rates higher than 50% when the probability of a true result is low. Results are only slightly better for power of 0.90. And if I tested different values of power - using what people often have as their actual power, rather than the convention - the false positive rates would likely skyrocket. As I mentioned in Wednesday's Statistical Sins post, low power can lead to erroneous significant results just as easily as erroneous non-significant results. When you play the game of probability, your study either wins or it dies.

I sincerely apologize for that bastardized Game of Thrones quote.
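
To see the effect of lower power in action, here's the same calculation rerun with a power of 0.50 - a value picked purely for illustration, not taken from any particular study:

# Illustration only: same calculation as bayes11, but with power of 0.50
bayes_lowpower <- (psig_given_nottrue1 * pnottrue) /
  ((psig_given_nottrue1 * pnottrue) + (0.50 * ptrue))
round(bayes_lowpower[2], 2)  # at P(T) = 0.10: about 0.47, versus 0.36 with power of 0.80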

So just for fun, let's also compute our false positive rates for two other alphas - 0.01, which is often used in studies where false positives are especially costly, such as drug trials; and 0.10, which is not usually used as an alpha but is often treated as an indicator of "marginally" significant results. First, alpha = 0.01:

# P(Tc|S) at alpha = 0.01 and power = 0.80
bayes21 <- (psig_given_nottrue2 * pnottrue) /
  ((psig_given_nottrue2 * pnottrue) + (psig_given_true1 * ptrue))
# P(Tc|S) at alpha = 0.01 and power = 0.90
bayes22 <- (psig_given_nottrue2 * pnottrue) /
  ((psig_given_nottrue2 * pnottrue) + (psig_given_true2 * ptrue))

Let's plot those results:

plot(bayes21, type="o", col="red", xlab="P(T)",
     ylab="False Positive Rate",
     main="False Positive Rates for Alpha of 0.01",
     labels=FALSE, ylim=c(0,1))
lines(bayes22, type="o", col="blue")
axis(1, at=1:19, labels=ptrue)
axis(2, at=c(0, .1, .2, .3, .4, .5, .6, .7, .8, .9, 1.0))
legend(12, .8, c("Power=0.9", "Power=0.8"), lty=c(1,1),
       lwd=c(2.5,2.5), col=c("blue","red"))


And finally, alpha of 0.10:

# P(Tc|S) at alpha = 0.10 and power = 0.80
bayes31 <- (psig_given_nottrue3 * pnottrue) /
  ((psig_given_nottrue3 * pnottrue) + (psig_given_true1 * ptrue))
# P(Tc|S) at alpha = 0.10 and power = 0.90
bayes32 <- (psig_given_nottrue3 * pnottrue) /
  ((psig_given_nottrue3 * pnottrue) + (psig_given_true2 * ptrue))

Let's plot those results:

plot(bayes31, type="o", col="red", xlab="P(T)",
     ylab="False Positive Rate",
     main="False Positive Rates for Alpha of 0.10",
     labels=FALSE, ylim=c(0,1))
lines(bayes32, type="o", col="blue")
axis(1, at=1:19, labels=ptrue)
axis(2, at=c(0, .1, .2, .3, .4, .5, .6, .7, .8, .9, 1.0))
legend(12, .8, c("Power=0.9", "Power=0.8"), lty=c(1,1),
       lwd=c(2.5,2.5), col=c("blue","red"))


There you have it. Thanks to Bayes' theorem, we know that our actual false positive rate - the chance that a significant finding is not true - can be much higher than 0.05, or whatever value we use for alpha.
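
If you want to keep exploring other combinations, the whole calculation can be wrapped in a small helper function - a quick sketch that isn't part of the code above - so you can plug in any alpha, power, and P(T):

# Sketch: false positive rate P(Tc|S) for a given alpha, power, and P(T)
false_positive_rate <- function(alpha, power, ptrue) {
  (alpha * (1 - ptrue)) / ((alpha * (1 - ptrue)) + (power * ptrue))
}
false_positive_rate(alpha = 0.05, power = 0.80, ptrue = 0.05)  # about 0.54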
