
This is one reason for going to vision-only automatic driving systems like Tesla is doing. The system is more likely to fail when a human would also have had a difficult time: heavy snow, sun glare, etc. Strange failures due to radar, lidar, and other sensors will not be understood or accepted.


That would be a reasonable argument only if Tesla's image processing were as good as a human brain (it's not) and if their cameras were reasonably comparable to human eyes (they're not). To take the argument to an absurd extreme, you can't cover a car in 240p webcams from 2003 and expect good driving performance.

Moreover, other players also have cars covered in cameras. If anyone thought vision only was the best path forward because they couldn't figure out sensor fusion, they'd already have done it and saved the BOM cost.

For what it's worth, my experience has been that cameras are one of the more problematic sensor systems overall. Vendor software is garbage, any particular tuning is finicky in extreme conditions, you have to clean the damn things, camera streams take up lots of bandwidth, etc.
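The bandwidth point is easy to see with back-of-envelope arithmetic. A minimal sketch, using illustrative numbers (camera count, resolution, and frame rate are assumptions, not any vendor's spec):

```python
# Raw, uncompressed bandwidth for a hypothetical multi-camera rig.
# All parameters below are illustrative assumptions.
cameras = 8
width, height = 1280, 960   # ~1.2 MP per camera (assumed)
bytes_per_pixel = 3         # uncompressed RGB
fps = 30

bytes_per_second = cameras * width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e9:.2f} GB/s raw")  # ~0.88 GB/s
```

Even before compression, that is close to a gigabyte per second flowing into the compute platform, which is why camera pipelines lean heavily on hardware encoders and early downsampling.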



