>Also, I don't know of any other topic area where I would look at a resource that describes fundamental building blocks in an instructive way in 2014 and say "don't read this, it's irrelevant 2-3 years later". For languages/libraries/frameworks, sure. But for basic theory?
Yeah. A lot of deep learning papers boil down to "we tried architecture X on dataset Y, and it seems to produce a small error rate". I don't know of any other area of computer science where getting results from an algorithm without explaining why those results occur is publishable.