> If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem
That's massively moving the goalposts on what counts as "an existential problem." The original framing was not economic dislocation but actual existence, i.e. existential. This new framing is a retreat to a way-of-life argument.
And I'm still calling baloney! The "AI will kill us all" argument backfired on Altman et al, so now we have an "it'll take over all the jobs" pitch. But it's all smoke and mirrors for investors. We have no good reason to expect current AI methods will lead to an AGI that can not only do most human labour, but do so economically competitively.
I don't understand how you can consider the AI industry to be in any sense retreating from prior claims. The existential problem remains an active near-future risk; you're hearing a lot about the jobs problem because it's already here, now, today. Do you not remember how much less capable AI systems were in 2023, and how implausible it seemed that they could become as good as they are now without new theoretical breakthroughs?
You're hearing a lot about the jobs problem largely because the media's job is to scaremonger. There have been multiple studies that have concluded AI's impact on jobs/layoffs is negligible: https://budgetlab.yale.edu/research/evaluating-impact-ai-lab...
This has become a truism with little actual evidence behind it. The claim that jobs were about to be wiped out was made in 2022, 2023, 2024, and 2025, and it's still being made, but it hasn't come true. The models got better this year, sure, but they've always gotten better, and none of the recent improvements have been as dramatic as, say, GPT-3 to GPT-4. I feel crazy. People keep saying the sky is falling while every researcher and every person who analyzes these issues professionally says the opposite.