I created some 3D models of a real-world building and its surrounding environment using photogrammetry from 20+ megapixel DSLR photos, and decided the accuracy was totally inadequate and the artefacts were too hard to clean up manually.
I then hired a dude with a LIDAR scanner and did it properly. The difference in quality/accuracy is like 120x80 ASF video files in 1996 compared to 4K footage today.
Anyone who thinks you can build "HD" virtual worlds using the crappy cameras on a Tesla needs their head examined. Maybe with thousands of passes and some epic compute and signal processing, but why bother? Just LIDAR it.
My Tesla can't even decide whether a traffic light is a single traffic light on a sunny summer's day from twenty feet away. Almost every time it's dark, or humid, or winter (road grime), it tells me one or more cameras are obscured. But only after I've already started driving, obviously. This supposedly cutting-edge AI driving machine frequently thinks I'm leaving the carriageway on UK B-roads (it's almost dangerous) and is significantly less reliable at distance-keeping cruise control and lane-assist than my Škoda. (I presume VW just quietly bought a black box from Bosch or whoever to do this.)
Tesla are barking up the wrong tree IMO. At this point the camera-only stance feels like a religious position, not one grounded in engineering reality. I imagine someone came to Elon and said "reconciling conflicting radar and camera signals is hard", and he applied his considerable genius and issued an edict: "let's not do that then!", as if it would magically make all the actual hard problems go away.
Heck, Teslas can't even seem to reliably parallel park themselves, frequently getting stuck halfway, or hitting kerbs. If they can't solve that highly constrained problem, I'm hardly going to trust taking my eyes off it at 70mph.