
The standard solution in ML for this is cross-entropy loss. You can think about it like this: from the probability estimate you derive two codewords, one saying the event will happen and one saying it won't, with lengths -log2(p) and -log2(1-p) bits respectively. Then when the event happens (or doesn't) you write the corresponding codeword into your log file. After a large number of predictions you can compare the sizes of people's log files. Someone having a shorter log file means they used a more efficient encoding, which means their probability estimates were better.
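A minimal sketch of the comparison, with made-up outcomes and forecasts just for illustration:

    import math

    # Hypothetical data: two forecasters give probabilities for the same
    # sequence of binary events; outcomes record what actually happened.
    outcomes     = [1, 0, 1, 1, 0, 1, 0, 1]   # 1 = event happened
    forecaster_a = [0.9, 0.2, 0.8, 0.7, 0.1, 0.9, 0.3, 0.8]
    forecaster_b = [0.6, 0.4, 0.5, 0.6, 0.5, 0.6, 0.4, 0.5]

    def total_code_length(probs, outcomes):
        """Total 'log file' size in bits: each outcome costs -log2(p)
        bits if it happened, -log2(1-p) if it didn't. Summed over
        events this is exactly the cross-entropy (log loss)."""
        bits = 0.0
        for p, y in zip(probs, outcomes):
            bits += -math.log2(p) if y == 1 else -math.log2(1 - p)
        return bits

    print(f"A's log file: {total_code_length(forecaster_a, outcomes):.2f} bits")
    print(f"B's log file: {total_code_length(forecaster_b, outcomes):.2f} bits")
    # The shorter log file belongs to the better probability estimates.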

But yeah, it's not possible to do this after just one event; you need some track record before you can say that someone is statistically significantly better.


