P-values get a large share of the blame for the replication crisis in science. People take for granted that the tests they use are valid, without justifying the leap from data to model. Often, reported p-values are erroneous because the underlying model doesn’t accurately describe how the data arose. In this talk, I gave three examples of hypothesis tests I’ve developed for settings where standard methods of analysis fail: testing the adequacy of pseudo-random number generators for statistical simulations, detecting gender bias in student evaluations of teaching, and risk-limiting election auditing.
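The point about model misspecification can be made concrete with a small simulation (my own sketch, not an example from the talk): a two-sided z-test that assumes i.i.d. standard normal observations is applied to data that are actually serially correlated. The null hypothesis (mean zero) is true, yet the test rejects far more often than its nominal 5% level. The AR(1) structure and the coefficient `rho` here are illustrative choices, not anything from the talk.

```python
import math
import random

random.seed(1)
n, reps, alpha, rho = 50, 2000, 0.05, 0.6

def z_test_pvalue(xs):
    # Two-sided z-test of mean 0, *assuming* i.i.d. N(0, 1) observations.
    z = math.sqrt(len(xs)) * (sum(xs) / len(xs))
    return math.erfc(abs(z) / math.sqrt(2))

rejections = 0
for _ in range(reps):
    # The null (mean 0) is true, but the data follow an AR(1) process,
    # so the test's independence assumption is violated.
    x, prev = [], 0.0
    for _ in range(n):
        prev = rho * prev + math.sqrt(1 - rho**2) * random.gauss(0, 1)
        x.append(prev)
    if z_test_pvalue(x) < alpha:
        rejections += 1

# Empirical rejection rate: several times the nominal 0.05.
print(round(rejections / reps, 3))
```

Because each observation carries information about its neighbors, the sample mean is far more variable than the i.i.d. model assumes, so the reported p-values are systematically too small: exactly the kind of silent model failure the talk was about.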