Hacker News

The FaceTime demo was interesting. Sure, you can see your friends' calls floating around the room...

But what do they see? And what happens when you call your friend who also owns a Vision Pro?



They just showed that people wearing a Vision Pro get a digitally mocked up version of themselves shown to others.


This seems like a step backwards from just seeing a real person's face on a regular screen :(


The demo for that looked pretty rough. The user's facial expressions looked unnatural, and the lip movements were poorly synced to the audio. I guess there's only so much an "encoder/decoder" model can make up for with such limited input data.


That explains why every other user appeared as a floating head; rendering everyone that way would let the system generate the same representation for each participant.


Wondering how they deal with the issue that the face starts moving before we start making sound. I guess network latency will help cover the gap.


They just went over it; it looks like they're relying on a motion-tracked digital persona (probably using some sort of NeRF approach).


I guess a Metaverse nightmare cartoon avatar?



