
In the face of empirical evidence, I adjusted my world view from close to yours to where I am now. Based on your other comment, you're having people do FizzBuzz; simply split candidates into two samples and see which does better: interactive face-to-face help, or an open IDE where they can run code uninterrupted.
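For context, FizzBuzz is the classic screening exercise both commenters are referring to: print the numbers 1 to n, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal Python sketch (the function name and return-a-list shape are illustrative choices, not anything specified in the thread):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:        # multiple of both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

The point of the exercise is not cleverness; it's a floor check that a candidate can write a loop with simple conditionals at all.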

If the second group performed better, then presumably the testing methodology played a role, with a higher percentage of people able to actually program than you're suggesting.

Granted it takes a lot of interviews to build a reasonable sample, but we aren’t talking about 1% better performance here.



>Interactive face to face help, or an open IDE they can run code on uninterrupted.

I've tried both. I've tried leaving them to it entirely, and I've tried leaving them to it and then, if they get stuck, helping to a lesser or greater degree.

It really makes no difference: the bad coders struggle on that and other problems; they simply lack the ability to think through the problem.

And again, I've seen enough anxious candidates to tell the difference. I'm pretty good at gauging how and when to help.

Overall, I'm not hearing many specifics beyond 'I am great at interviewing'.

The empirical evidence is overwhelmingly that there are many bad programmers out there with long CVs. That's also my experience WITHIN companies, though a lesser proportion.


Fair enough, I can accept it could be a difference in candidate pools or something. That said, for what it's worth, I personally saw a pass rate jump from about 40% to 90% using different methods.

In terms of a test, I ask for the mean, median, mode, and range of an array, then go into basic performance trade-offs.
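A sketch of what that exercise might look like (the function name and dict-returning shape are my own assumptions; the commenter doesn't specify a language or signature), with the performance trade-offs noted inline:

```python
from statistics import mean, median, mode

def summarize(xs):
    """Return mean, median, mode, and range of a non-empty list of numbers."""
    return {
        "mean": mean(xs),            # sum / count: O(n)
        "median": median(xs),        # middle of sorted data: O(n log n) via sort
        "mode": mode(xs),            # most frequent value: O(n) with a hash map
        "range": max(xs) - min(xs),  # two linear scans: O(n)
    }

print(summarize([1, 2, 2, 3, 10]))
# {'mean': 3.6, 'median': 2, 'mode': 2, 'range': 9}
```

The follow-up discussion writes itself: median is the only one that forces sorting (or a selection algorithm to get O(n)), and mode is where candidates either reach for a hash map or reveal they'd write a nested loop.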



