
It's interesting how quickly support vector machines went from the hot new way to classify images to an afterthought once deep learning started getting great results.


Noticed that too. It feels like it was just a few years, and all of a sudden everything is "deep" now.

The same thing happened with data storage. As soon as "big data" appeared, everyone stopped doing plain data and started doing "big data". Now even the term is kind of a joke.

I predict that in a few years the term "deep learning" will mostly be used in an ironic sense as well.


> I predict that in a few years the term "deep learning" will mostly be used in an ironic sense as well.

I may be a bit behind the times, but I'm also mystified by "deep learning's" popularity. Both giant neural nets and kernel methods have overfitting problems: torture a billion-parameter model long enough, and it will tell you what you want to hear.

SVMs address this by finding a large margin for error, which will hopefully improve generalization. DNNs (I think) do this by throwing more ("big") data at the problem and hoping that the training set covers all possible inputs. Work on adversarial learning suggests that DNNs go completely off the rails when presented with anything slightly unexpected.
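A minimal sketch of the margin idea (assuming scikit-learn; the toy dataset and C value are just for illustration):

    from sklearn.datasets import make_blobs
    from sklearn.svm import LinearSVC
    import numpy as np

    # Two separable clusters with binary labels.
    X, y = make_blobs(n_samples=200, centers=2, random_state=0)

    # C trades margin width against training errors: smaller C, wider margin.
    clf = LinearSVC(C=1.0).fit(X, y)

    # For a linear SVM the geometric margin width is 2 / ||w||.
    w = clf.coef_[0]
    print("margin width:", 2.0 / np.linalg.norm(w))

The whole point of the formulation is to make that number as large as the data allows.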


My other comment addresses some of this, but you're overstating things a bit. Throwing more data at the model is one solution; it's just not the only, or even the best, approach. Properly measured performance on good holdouts and the application of regularization avoid the worst of overfitting. This is standard practice in most of machine learning, not just deep learning.
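To make that concrete, a sketch of the holdout-plus-regularization routine (assuming scikit-learn; the dataset and C values are arbitrary):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

    # A proper holdout: performance is measured on data the model never saw.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # L2 regularization (smaller C = stronger penalty) curbs overfitting.
    for C in (0.01, 1.0, 100.0):
        clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
        print(C, "train:", clf.score(X_tr, y_tr), "holdout:", clf.score(X_te, y_te))

A gap between the train and holdout scores is exactly the overfitting you regularize away.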

Deep learning gets a lot of hype because, for many applications, deep models perform better and scale better without a lot of the tricks and extensions that are now available for SVMs. You can even use a large-margin loss with deep models to get some of the benefits of SVMs.
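For instance (a sketch assuming PyTorch, where nn.MultiMarginLoss is a multi-class hinge loss; the tiny network and random data are made up):

    import torch
    import torch.nn as nn

    # A toy "deep" model trained with an SVM-style large-margin (hinge) loss.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    loss_fn = nn.MultiMarginLoss(margin=1.0)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    X = torch.randn(128, 20)             # random features
    y = torch.randint(0, 3, (128,))      # three fake classes

    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(X), y)      # penalizes scores inside the margin
        loss.backward()
        opt.step()
    print("final hinge loss:", loss.item())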

Adversarial examples are way overblown. First, SVMs are not immune to them either. Second, very few applications are actually threatened by them.
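On the first point, a sketch of how easily even a linear SVM is flipped (assuming scikit-learn; the data are synthetic):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    clf = LinearSVC().fit(X, y)

    # Step one point just past the decision boundary along -w/||w||,
    # the same gradient-direction trick used against neural nets.
    w = clf.coef_[0]
    x = X[0]
    f = clf.decision_function([x])[0]
    step = abs(f) / np.linalg.norm(w) + 1e-3   # distance to boundary, plus a hair
    x_adv = x - np.sign(f) * step * w / np.linalg.norm(w)

    print("original:", clf.predict([x])[0], "perturbed:", clf.predict([x_adv])[0])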


Empirically, CNNs generalize better on image recognition tasks than hand-built features. This comment doesn't make much sense and is needlessly obtuse in the face of that progress, tbh.


That outcome doesn't seem terribly likely. It's true that, like big data, deep learning is often misused. This is largely because it works well enough in the off-the-shelf case and it's "easier" due to tooling, transfer learning, and free educational materials for beginners. However, deep learning also obtains state-of-the-art results in a number of tasks and domains when you know what you're doing.

I don't think your scenario is likely to occur unless something else starts outperforming deep learning (in the broadest sense) _and_ there's an approachable alternative to solve the same problems at least as well.


Hot new?! For whom?

SVMs have been in active use since the early nineties and were formulated well before that.


They were also all the rage in pretty much everything else. In the problem areas not taken over by DNNs, gradient boosting has largely replaced SVMs, since GBMs train faster and get better accuracy.
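A rough sketch of that comparison (assuming scikit-learn; the dataset size and default settings are arbitrary):

    import time
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=5000, n_features=30, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Same data, default settings: compare fit time and holdout accuracy.
    for name, clf in (("GBM", GradientBoostingClassifier()), ("SVM", SVC())):
        t0 = time.time()
        clf.fit(X_tr, y_tr)
        print(name, "fit: %.1fs" % (time.time() - t0), "acc:", clf.score(X_te, y_te))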



