It downloaded itself on my phone as well. I thought it was some quirk with the Apple Watch sync: I used to have Headspace installed at some point, and that automatically shows up on the Apple Watch, but deleting an app on the iPhone doesn't always delete the corresponding Apple Watch app. So I assumed that opening Headspace on the Apple Watch caused it to redownload itself on the iPhone.
same. i get blasted with ads for this app on whatever platform, never installed it myself. the amount of promotions + this = my underdeveloped brain is so ready to assume the worst here. been a while since i used my pitchfork & i'm here for the riot.
if there is, in fact, something nefarious at play, that would be a pretty crazy 2026-era exploit. but i'm certain it's a bug/artifact of some sort that, for whatever reason, affects this specific app.
Maybe the developer was using Headspace as part of the test data and it bled into production?
It's hard to imagine what Headspace would hope to achieve if this were an exploit executed by them. It's so conspicuous that it makes no sense to do on purpose. At least some portion of Apple employees and their families are going to be affected by this, and it would escalate to the legal department immediately.
when "explaining a thing, no more assumptions should be made than are necessary."
could be an ios bug, or a bug in the notification library they use. are any other apps behaving similarly?
considering the possibility this was on purpose: they would risk getting banned from the appstore, and no, they are not big enough to avoid that. so it's unlikely this was intentional.
I feel like there are (at least) three main critiques of AI, and I wish we could debate them separately, because I think they each have different resolutions.
The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income.
The other two critiques are trickier. The next is the environmental impact of AI, and the response here is difficult. Doing work to make AI more efficient, and continuing to develop cleaner energy sources, is paramount. Taxes and efficiency requirements might be a start. We have the technology to produce energy in sustainable ways, but it is expensive. It has to be non-negotiable if massive energy usage for AI is to continue.
The last is the REAL conversation, and I don’t know the answer. How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
I guess there is another issue, related to the last one, which is how do we deal with the ability to use AI to mislead and commit fraud at scale. How do we deal with not being able to trust what was actually said/done by a human versus what is AI pretending to be human? How do we avoid and mitigate the ability for AI to generate a massive amount of custom content that is used to mislead and defraud people? So much of our current mitigation strategy relies on the assumption that it takes a lot of effort and time to do certain things that can now be done instantly thousands of times.
>The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society.
This was the argument about robots. It did not pan out. No taxes materialized. Robots and automated machines have not shared their productivity gains. In fact, things like self-checkout have shifted the labor load to the customer instead of the company.
>We have the technology to produce energy in sustainable ways, but it is expensive
AI Datacenters should be completely sustainably self-powered. Full stop. We did not spend decades bringing down the cost of power only to have it all hoovered up by robber barons who "need" it to be the first immortal AI God. We did not install water treatment plants to bring down our water usage rates just to feed the machine spirit.
>How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
Someone said it as a joke, but I want AI to be doing my dishes and sorting my laundry while I write books and compose music. I don't want AI writing books and composing music so I have more time to do my dishes and sort my laundry.
If you lost your $60,000 a year job due to this, do you really believe a basic income funded by it will make up that loss? It won't. Basic income in the US is usually proposed at $12k per year, which would add another $3 trillion to the budget. Do you think you can even get that just taxing these companies? I don't.
People who bring up basic income need to get serious about the numbers involved because I never see it. It's not a realistic solution.
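To get serious about the numbers, the back-of-envelope math is easy to reproduce. A quick sketch (the population figure is a rough assumption, not official data):

```python
# Rough back-of-envelope check of the UBI cost claim.
# The adult population figure is an approximate assumption.
ubi_per_person_per_year = 12_000       # commonly proposed US UBI level, USD
us_adult_population = 260_000_000      # rough estimate of US adults

total_cost = ubi_per_person_per_year * us_adult_population
print(f"Annual cost: ${total_cost / 1e12:.1f} trillion")  # -> Annual cost: $3.1 trillion
```

Even at a level well below the poverty line, the total lands in the trillions per year, which is the scale the objection above is pointing at.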
People who complain UBI doesn’t make mathematical sense don’t realize our current economy doesn’t make mathematical sense either. All this prosperity we in the developed world enjoy comes at the cost of extracting wealth from the rest of the world and of governments taking on ever more debt.
The modern (social or economic) history of China, Europe, Russia, UK, US are all good case studies. In aggregate, I think they underscore the reality of the system. Every year we now have high profile people coming out of the system screaming about how insane it is: bankers, traders, politicians, military intelligence. If you had to boil it down to a single book debunking late 20th century pax Americana international macro-economics, it's hard to go past Confessions of an Economic Hitman, though it's not a formal academic work. I've personally had chapter one verified by an Indonesian diplomat. Alternatively, take the quippy summary of a world-recognized capitalist, George Soros: classical economics is based on a false analogy with Newtonian physics.
Fair warning: I’m quite ignorant in terms of economics, so this is a naïve way of looking at it.
The question that always pops up for me when it comes to UBI applied to the current capitalist system: even if you did actually come up with the money somehow (which is a pretty huge if as you say), once everyone has X “base money” per month, doesn’t that mean the cost of living (specifically renting) will rise to match this new “base”?
The cost of living would certainly rise somewhat but the point is that UBI is redistributive: the same absolute amount to everyone raises low incomes by a larger percentage than high incomes. Long term effects are hard to predict but in the short term it would mean the poor doing slightly better while the middle class is slightly worse off. The non-working (owning) class would be mostly unaffected as assets are insulated from inflation.
Another factor to consider is that putting more money in the hands of people in need of <thing> means producing <thing> becomes more profitable and thus more investment and resources are directed towards <thing>. If we assume the economy works the way the proponents of capitalism say it does, this should eventually drive the cost of living back down.
But personally I think the biggest benefit of UBI would be the reduction in number of people who are desperate enough to accept work – both legal and illegal – that is unfairly compensated, inhumane and/or immoral. The existence of that class of people is the driving force behind many societal problems. Exorbitant amounts of resources are wasted treating the symptoms of those problems instead of fixing the root cause.
I mean the numbers. 12k per year is peanuts. You cannot live off that, and to do it we'd be nearly doubling the budget (that's old data; it's probably not that portion of the budget anymore).
That 12k doesn't include healthcare, it doesn't include a lot of things. It's basically ensuring that people live well below poverty level, and for what? I just don't get how the numbers work, even if it was politically feasible.
I'd much rather have free healthcare and other amenities other countries have. Here in the US if you lose your job there is virtually nothing between you and the streets besides family and friends.
I'm facing this right now. I cannot get a job in tech which means restarting my career. Getting a job right now is not easy in any field especially not in anything like a living wage. If I did not have my parents I would be on the streets right now, thankfully I don't have a mortgage or anything like that. I'm not sure how much $12k per year would really help, it certainly wouldn't pay for housing.
And even if you did get the 60k and can never find work again, are you gonna be happy about the next-door neighbor working for 120k and getting his 60k on top?
All the proposals I’ve seen would set the marginal tax rate on the 120k so high that his earnings would end up more like 40k from the 120k job, and then he gets his 60k. So, still some benefit to working, but a very progressive tax rate on higher earnings. Not sure I agree with this, but that is what I’ve seen.
Your neighbor would get $60K UBI but their tax bill would go up by $80K because the government needs tax revenue to pay the UBI.
For high levels of UBI it’s not possible to get all of the necessary tax revenue from taxing billionaires or corporations or other simplistic ideas that sound good until you do the math.
> do you really believe a basic income funded by it will make up that loss? It won't.
Almost definitionally it would. If society is saving a bunch of money on all that saved labor, that extra value is still there, it just needs to be appropriately redistributed
This is one of the most horrifying comments I've ever read on this website. It's practically a dare to engage in civil war or violent revolution. People fundamentally experience life as relative - as changes. You can't "deprogram" intrinsic human nature. You can just wait 80 years for everybody who's not used to the new hell to die.
24k puts you near poverty level. $1k per month will cover food expenses; it won't cover transport, shelter, and certainly not medical. On 12k per year you have enough money for food and praying that an emergency doesn't happen. It's hard enough living on 40k, and I'm not even in a place where costs are high.
UBI will never happen in the US so it's a pointless argument. Americans will have plenty of pawn shops and short-term loan services to help them, though.
It is kinda funny to see you guys petrify at the thought of people living in poverty, pretend you care, and then use us as a political foil in your useless debates.
How is not wanting to live in poverty using the poor as a foil? How is it hypocritical/fake to care about people who are in situations that I don't want to be in? Isn't that just logical?
> $12k a year is plenty. You’ve just been raised above your natural standard
I get where you're coming from. But this is politically unworkable, and for good reason. If AI increases productivity, that means more wealth, which means living standards should go up.
> $12k a year is plenty. You’ve just been raised above your natural standard
> I get where you're coming from.
You do? Have you priced out health insurance lately? I have. Insurance on HealthCare.gov for my partner and me would be $1700/month for what amounts to catastrophic coverage. It had around a $20k deductible and covered nothing other than an annual physical prior to hitting the deductible.
With $2k/month to work with between us, I guess we have to somehow find a place to live and eat on the remaining $300 as we pay for our functionally worthless health insurance since there is no way in hell we could afford to pay the deductible.
Their numbers are wrong. But their fundamental argument, I believe, is degrowth: that we are living beyond our means and need to lower our expectations of living standards to live sustainably. It's a philosophically appealing argument. It's also wrong, unless you're comfortable with the inevitable violence and likely population destruction that would follow from an honest degrowth agenda.
Just as hyperloop was designed as a techbro pie in the sky notion to kill high speed rail, basic income as an idea is designed to kill more realistic attempts to shore up welfare, e.g.
* A job guarantee like we had during the Great Depression
* Lowering the retirement age
* Raising the minimum wage
* Expanding Medicare to everyone
It's worth remembering that if AI really can do everyone's jobs then it'll be wildly deflationary so there's no need to worry about pesky government spending on this stuff or paying people more. Spend spend spend, baby!
Ah, you're worried it can't do that? Maybe it's mostly smoke and mirrors then.
The historic origins of UBI are from political parties that wanted most of those same things, too, especially raising the minimum wage and expanding medicare to everyone.
A strong minimum wage makes UBI more attractive. More people will want jobs in addition to UBI. UBI is also seen as a market force to naturally drive minimum wage up, because UBI offers workers more choices: more opportunities to build a startup or take a sabbatical instead of working 40 hours. The labor market has to compete with that "opportunity cost" in ways it doesn't need to care about today. It would increase liquidity in the labor market and, in terms going all the way back to Adam Smith, make the market more free. Wages would better reflect demand for the work if laborers had more choices at more times in their lives where and how much to work.
Medicare for Everyone and Universal Health Care make UBI simpler. Health risk is always going to be variable, and insurance-like risk pooling will always be a good idea for society: defray costs in bad years from surpluses in good ones, and defray costs from unhealthy people by considering how many people are kept healthy. UBI could be designed to try to cover much of health care, but it is never going to be as efficient as a pooled single payer. If a country already has Universal Health Care, the conversations about UBI get a lot simpler. It is a lot easier to sell as a flat universal grant. Your health care can be provided by a complex risk pool and smart accountants doing a lot of smart math on your behalf. Your UBI can be just a flat number. Simpler: you can think about how you spend your UBI without having to consider your predicted health outcomes in that period of time. UBI's flat universal value can be set on benchmarks that don't need complex amortization schedules and risk analysis.
The Canadian Social Credit Party, formed to espouse UBI, was one of the keys to building Canada's Universal Health Care, and their priority was that first, then UBI. That still seems the best priority order to me.
Job guarantees and higher minimum wages are just UBI with extra steps, while lowering retirement age is just conditional UBI by another name. If you're giving people more money in exchange for nothing (or nothing of any value to anyone, as in the case of a job guarantee), it's effectively indistinguishable from UBI.
"When our grandparents built the hoover dam, the lincoln tunnel and the triborough bridge with a job guarantee that was just money for nothing - UBI with extra steps."
^ this would be an accurate representation of your opinion then?
That job guarantees exceptionally produce useful things doesn't mean that they don't overwhelmingly produce useless things, or things that are more expensive than they're worth.
> doesn't mean that they don't overwhelmingly produce useless things, or things that are more expensive than they're worth
One could say the same thing about all the little art projects a hypothetical society on UBI might busy itself making. The pertinent difference seems to be one about scale and co-ordination. Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption.
>Job guarantees say we work together–through a centralised power–to build big things. Handing everyone cash leans more towards arts and crafts and consumption.
Creating busywork doesn't strike me as a particularly worthwhile endeavor, compared to idleness.
So the problem with 3 out of 4 of your challenges is that, right now, achieving them means young people need to work more. Money is an issue, but money by itself cannot solve it; it really needs to be backed with more people working. That's not going to happen; in fact, fewer people will work.
So without AI, the path forward is obvious: those 3 will become worse. Lowering retirement age, raising minimum wage, and expanding medicare won't happen without AI. They can't.
We already are reasonably close to a job guarantee. If unemployed people would accept any job, unemployment would drop by a lot. Not to zero, obviously, but a lot. Unemployment is also pretty low by historical standards, so fixing unemployment with a job guarantee can't fix much. We'll need something else.
> It's worth remembering that if AI really can do everyone's jobs then it'll be hyperdeflationary so no need to worry about pesky government spending on this stuff.
So yeah, I disagree. If you're going to assume AI will just jump to how capable it'll be 100 years from now, then you need to think a bit deeper. What AI effectively does is provide capital-based labor. You buy a robot. The robot costs a lot, but operational expenses are marginal: energy and (maybe) "tokens". Add solar power, and say local AI becomes a thing, at least for normal robots, and you need nothing other than the initial cost of the robot.
Okay, so this will mean everything can be staffed with tens of thousands of these robots. Remote mine? No problem. 500 robots in your house? Why not. Cleaning very large facilities? Not a problem. Farm hundreds of square kilometers? Fine. Dig a canal to avoid the strait of Hormuz and just do it with shovels? Let's get to it. AI can be a universal machine that can do anything labor can achieve.
Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out.
> Obviously AI will massively increase the output of the economy, and people will figure out what to do with that, as people will want a shitload of things done. Which means the problem you're identifying will be trivial to solve, and we'll figure something out.
Historically, that "we'll figure something out" has usually meant the economical wipeout of large parts of the population, sooner or later followed either by some epidemic event or other "act of god" (like fires) that was a consequence of squalor and poverty, or by some sort of war to thin out the herd.
I'd prefer if history would not repeat itself for once.
> Historically, that "we'll figure something out" has usually meant the economical wipeout of ...
Uh, historically everything has usually meant the economical wipeout of large parts of the population. It still means that in most third world countries. Economic power is not the huge differentiator here.
Like the post above says, there are multiple issues at play with AI. The same can be said about universal income.
The pay levels are not comparable because you are also recompensed with time. You may choose to spend your time in a number of ways that you find rewarding that also reduce your expenses. Making your own meals, clothes, furniture, beer, wine etc. There are a lot of people who would enjoy doing these things but are too time poor to do so.
Your expenses also reduce by the amount you must spend in order to make yourself available to work. Travel, work clothes, medical certificates when sick. You can spend a lot in order to be paid.
If you want a world with a reasonable distribution of income levels, it stands to reason that those receiving more right now should receive less. Certainly the absolute wealthiest should reduce the most, but on a global scale it is hard to defend that those in the top 10% of incomes should retain their position.
The proposal for how much a universal income should pay is a variable to be argued itself. I can certainly see it being argued for at a lower level than ultimately desired since something is better than none.
In a sense, the end state of a universal income in an equitable world would be remarkably simple: the income available divided by the world's population.
Those receiving more than their share now may not be happy about it, but I'm not sure they have a right to their larger portion either.
The fourth aspect to discuss is how do we want to restrict the influence of AI companies on politics? Will we allow the CEOs to implement Thiel's vision of a world run as a company with CEOs at the top via massive monetary influence on political decision making, effectively abolishing democracy? If they really manage to replace 50% of the workforce with AI, their influence over everything from regulation to elections to social security networks as well as foreign policy will be enormous.
There is also a likeability problem. Altman and, shockingly, to a lesser degree, Musk have terrible brands. When folks see those people at the top of these companies, folks who have been publicly saying they're going to cause massive job losses and cause human extinction or whatnot, they're going to hate the companies irrespective of the actual risk of job losses or environmental impacts.
I'm curious for metrics, but Dario strikes me as being less perpetually online. Given equal time, they may each be unlikeable. But they don't put themselves out there equally–Sam and Elon are unable to focus on their work. (I'll admit I've had a soft spot for Dario since he stood up to Hegseth–maybe I'm just not seeing the equal hate he's getting.)
> How do we handle AI doing creative work? How do we treat AI creative work? How much creative work do we feel comfortable handing over to AI?
Just as food for thought: looking back into history, during the late 1920s mass production had a critical impact on Art Deco [1]. Artists were divided on the question of whether mass-produced art (using new industrial methods) could have a quality similar to hand-crafted art. It is clear that different people will have different opinions on the subject.
The technology is not there yet, but one example of mass production from AI would be book adaptations into movies. I'm sure there are many other examples, hard to predict, that might empower people, degrade art quality, improve art quality, divide people, or maybe bring people together.
I think you're missing one of the major reasons people are against "AI": the jerks at the top. When obviously nefarious people are lining their pockets and not bothering to even pretend to care about the people around them, it's no surprise they're hated.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income
Every call for UBI should be qualified with two estimates:
1) How much money you think UBI will pay out
2) How much money you think the tax will generate
Creating a UBI program with AI taxes sounds like a clean solution to something until you do any math.
If we estimate today’s AI revenues across all the big providers at $100B annually (a little high) and divide by the population of the US, I get around $24 per month per person.
So a 100% tax on AI plans would allow us to give UBI of about 80 cents per day.
Even 10X the revenues wouldn’t bring that to parity with UBI expectations. A 100% tax would also be an incredible gift to foreign AI companies that could offer similar services for half the price to everyone else in the world.
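That arithmetic is easy to reproduce. A quick sketch, where the revenue and population figures are rough assumptions, not official data:

```python
# Back-of-envelope: a 100% tax on an assumed $100B/year of US AI revenue,
# divided across the whole US population. Both figures are rough assumptions.
ai_revenue_per_year = 100e9        # assumed total annual AI revenue, USD
us_population = 340_000_000        # rough US population

per_person_per_year = ai_revenue_per_year / us_population
per_person_per_month = per_person_per_year / 12
per_person_per_day = per_person_per_year / 365

print(f"${per_person_per_month:.0f} per month")   # roughly $25/month
print(f"${per_person_per_day:.2f} per day")       # roughly $0.81/day
```

Tweak the inputs however you like; the answer stays in the tens of dollars per month, not the hundreds needed for anything resembling an income.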
This is based on the assumption that AI is going to take all our jobs. If this is true, then as more jobs are absorbed by AI, the revenue would increase.
I think you may be going too far, in that your critiques assume the tech is further along than it actually is. There are three fundamental problems for mass AI adoption/AGI:
1. Lack of memory/continuity
2. Lack of agency
3. Lack of self-awareness
Based on my understanding of the basic 'loop' of an LLM, solutions for these may be decades off or not possible. Which leads me to the fourth problem:
4. Lack of compute
To get anywhere near AGI we need massive context windows. The whole thing is a mess.
I think people really confuse their imagination and expectations with reality. There's so much talk about AGI and mass layoffs. Then there is my experience.
I was talking to Claude and ChatGPT, trying to fix an issue with a simple function in Rust, which returns a boolean depending on day of week and time of day. The logic looked ok to me, but tests were failing. Notably, my tests derived from real-world data were succeeding, while the brute-force/comprehensive tests written by Claude were failing. I wanted those "just to be sure". Both Claude and ChatGPT were spinning their wheels, introducing fixes, then undoing prior fixes, and so on. They also updated the tests. We went from one failure to another, while they confidently reassured me that "this is the fix", that they had found the "crucial bug", etc.
Turned out my logic was correct from the beginning. My tests were correct. Claude's tests were broken. I realized this by writing my own brute-force test: just a simple loop with asserts and printlns to see what was failing. I did what the machine was supposed to do for me. In less than 5 minutes I fine-tuned the test to actually check what it was supposed to be checking, and voila. The "fast" thinking machine episode took me 2 hours and only produced frustration. Sorry, I should learn to speak the language - AI reduced my development velocity :)
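For anyone curious, the pattern is simple enough to sketch. This is a hypothetical illustration in Python (the anecdote's actual code was Rust, and `is_business_hours` and its Mon-Fri 9-17 rule are invented here, not the real function): enumerate every input and assert against an independently written restatement of the rule, instead of trusting generated tests.

```python
# Hypothetical function under test: the rule (Mon-Fri, 9:00-17:00) is
# invented for illustration, not the actual function from the anecdote.
def is_business_hours(weekday: int, hour: int) -> bool:
    # weekday: 0 = Monday .. 6 = Sunday; hour: 0..23
    return weekday < 5 and 9 <= hour < 17

# Brute force: all 7 * 24 combinations, checked against a plain
# restatement of the rule; print any mismatch instead of guessing.
for weekday in range(7):
    for hour in range(24):
        expected = weekday in (0, 1, 2, 3, 4) and hour in range(9, 17)
        actual = is_business_hours(weekday, hour)
        if actual != expected:
            print(f"MISMATCH: weekday={weekday}, hour={hour}")
        assert actual == expected
```

With the whole input space this small, exhaustively checking it takes milliseconds and removes any need to trust a generated test suite.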
The only poverty I see coming is from the collapse of quality after these dumb machines are used to replace people who actually know what they are doing.
They are? Is your LLM ready to run your organization without further input from you or anyone? Do you realize that "memory" requires eating your hilariously small context window?
Have you not had a discussion with Opus where it insists it is correct about something it is objectively wrong about for several turns?
That seems like an unreasonably high standard. I like to think that I have memory, agency, and self awareness, but I'm not ready to run my organization without further input from anyone.
> Do you realize that "memory" requires eating your hilariously small context window?
I do! LLMs are structured differently than humans, so the component we call "memory" corresponds to what humans call "short-term memory"; practical long-term memory for an LLM looks much more like what a human would call "let me write this down". But you can and commercially available systems do load it into context on demand when it's needed for some problem or another.
The LLM only currently has the illusion of these things. Hence the bubble.
I know that you (or anyone) as a human being don't have the illusion of these things.
This is not like the car replacing the horse for transportation. The LLM as-is cannot fundamentally replace the person. They require the agency of a human to take turns at all, and even more so to enact change in the world.
Your LLM does not actively engage in the world because it does not experience anything. It only responds to queries. We can do a lot with that, but it's not intelligence. It can't say oh hey SpicyLemonZest, I was thinking and had an idea the other day. Because it has nothing between each query.
I don't think the last two critiques are good critiques at all. The environmental impact is a function of our energy sources, not energy uses. Complaining about energy and water when we have effectively infinite energy beamed down onto a planet that is 70% water seems silly.
And AI "Ikea-fies" art and creativity. It doesn't get rid of it. Of course you can get a generic table from IKEA, but for a real unique piece, you need to go to a real artist. Always.
The real main critique is for AI jobs that are a one-to-one replacement, your taxi driver, your dock worker etc. I don't think UBI is a viable solution (I used to) but nothing replaces the community and status that a real job gives you. This is going to be a tough one.
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with.
In the same way that it was straightforward to deal with job loss from the industrial revolution, or when the US shipped away all its manufacturing capability?
The main critique of AI is that it's a dumb hallucinating parrot. It can't do genuine human quality work at all, outside of extremely narrow domains like basic translation and copyediting. Even for Q&A, while it can be useful by quickly accessing a huge storehouse of learned knowledge, the vulnerability to hallucinations means that human expert verification will always be required.
I'll note that there can be multiple main critiques coming from an incoherent set of viewpoints, since this is public opinion we're talking about.
Between "AI doing creative work", if you believe in that, and "fraud", there's all the low-key filler material that's sub-creative and sub-fraudulent. There's a similarity between the phrase "it was made with AI" and phrases like "I didn't bake your cake myself, it came from a store" or "sorry, it's just a cheap plastic one". So part of AI's image is that it's a flourishing new source of disappointment.
Universal basic income is not an adequate replacement for a good career. Universal unconditional prosperity might be one, but it's not clear whether AI can really do that.
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society. The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income.
The concern I hear the most (which I don't think is common among the general public) is the existential risk one (that an AI may be created that drastically exceeds human intelligence, and that it may accidentally be incentivized to take actions that destroy most or all of human civilization).
> concern I hear the most (which I don't think is common among the general public) is the existential risk one
Altman and friends' "stop us before we shoot grandma" PR tour in 2023 and '24 is largely the cause of this AI backlash. If you tell everyone you're building something that will kill us all, you will scare up investors. But you'll also turn the public against you. In truth, we have zero evidence of the alignment problem to date in the existential form. Instead, it's the usual technology enabling bad actors stuff.
The "alignment problem" as traditionally understood assumed a different path to AI development, where the best AIs wouldn't primarily operate on a substrate of human language. If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem, and it seems no less likely today than it did in 2023.
> If AI becomes powerful enough to make human employment non-viable without being post-scarcity enough to make permanent unemployment viable, that's going to be an existential problem
That's massively moving the goalposts on what counts as "an existential problem." The original framing was not economic dislocation but actual existence, i.e. existential. This new framing is a retreat to a way-of-life argument.
And I'm still calling baloney! The "AI will kill us all" argument backfired on Altman et al, so now we have an "it'll take over all the jobs" pitch. But it's all smoke and mirrors for investors. We have no good reason to expect current AI methods will lead to an AGI that can not only do most human labour, but do so economically competitively.
I don't understand how you can consider the AI industry to be in any sense retreating from prior claims. The existential problem remains an active near-future risk; you're hearing a lot about the jobs problem because it's already here, now, today. Do you not remember how much less capable AI systems were in 2023, and how implausible it seemed that they could become as good as they are now without new theoretical breakthroughs?
In that sense the general public is less superstitious than many technologists. Some of the general public might anthropomorphize too hard, which is pretty tame compared to the belief that an alien AI intelligence will sprout and kill us, accidentally or intentionally.
As far as the paperclip problem is concerned, we’ve already had that problem for a long time now in the form of good old fashioned human institutions.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income
The problem for jobs is that there are 200 countries and all the earnings will go to a few. Universal basic income for everyone? Or just the US?
Who gets to keep their house locations in a new fair world? The person whose parents bought in the right place 50 years ago? Who pays the money these models earn, if nobody clicks ads or does a job? What is income for if we don’t work and can just ask the AI for everything we want?
What happens when the super smart AI comes up with “better” (more fair, consistent, etc) answers than you think you have to questions like the above? What if they end up socialist? Do we force it (and invite risk it escapes and fights us for the greater good) or give in to the presumably more thorough reasoning?
Needing fewer offices, fewer people driving to those offices, less A/C and heating for those offices, and fewer resources to build those offices could offset the energy usage of AI.
We can just turn all the office buildings into datacenters, they already look like heating vents! cover them in solar panels on the outside to cover the windows, and done!
I understand your points, but I think what scares people is that the solutions you propose are disregarded by our politicians. At least in the US, both politicians and the large donors funding them seem to be more and more allergic to anything resembling a universal basic income, and they do their best to scare people away with fearmongering about “communism”. The US is also doing a hard U-turn away from environmental protection and is trying to frame environmental conservation as radical and harmful. Other countries might be doing better on these fronts, but it’s definitely not a good sign that the US doesn’t seem to be on board with your first two solutions.
In the more immediate run, I think the concern is that AI will reduce the ability of workers to collectively bargain and thereby grant the wealthy oligarchs even more control over their workers’ lives.
I completely agree that governments and power brokers will disregard these solutions unless forced.
However, they will also disregard any attempt to slow down or halt AI progress in general, so it isn't like the people wanting to end AI in general are any more likely to succeed than those wanting to do what I propose.
I personally feel my suggestions would be slightly more feasible to gain support for than trying to stop AI completely. The power brokers in control of AI certainly aren't going to stop developing and pushing it, but they might be convinced that sharing the wealth is the only way to avoid massive revolt in the long run. While it is conceivable that in the AI future the wealthy wouldn't need the masses for labor like they do now, they still need to not be killed in a massive uprising when 90% of the population is unemployed and starving. While I know a lot of people think the plan is just to kill off that part of the population, that is not easy to do even with an army of AI robots, and it would likely be cheaper and easier to just share a bit of the productivity. I don't think it will be trivial, but I don't think it is impossible.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income. There are obviously a ton of variations on this idea, but I think the general premise of sharing the gains with everyone is sound. I don’t think many would complain if they lost their job but kept their income.
Nice, but completely unrealistic. The whole reason why AI is/will be adopted by companies like wildfire is to cut costs and increase profits. If they have to pay taxes equivalent to what they were paying in labor (or anywhere close to that), then AI is for nothing. Business will never agree to it. So this will never happen unless there is some sort of social revolution that completely remakes the system.
Your first critique has a massive hole in it, because it assumes that every job replaced by "AI" is actually going to be done as well by the "AI" as it was by the human.
To the extent that this replacement even happens, that assumption is laughably false.
Yes; LLMs can answer some questions well (but unreliably) and with the right setups, can be rigged to perform some tasks well (but unreliably).
There is no way they are ready to take over a single full-time job. If any employer tried, the number of errors in the performance of that job would jump by a huge amount, because LLMs are not reliable and cannot be made so.
There is another critique that is not specific to AI but I think is bigger than all of these: that a relatively small number of large companies, and the small number of very wealthy people who control those companies, have an outsize influence on many aspects of society. AI is the poster child for this right now, but tech companies in general are also reviled, and more generally all kinds of companies (media, fossil fuels, etc.) are targets of opprobrium.
From this perspective, the main irritation of AI is that it is the biggest, most intrusive case of "some rich guy is messing with my life". This is driven largely from the willingness of a small number of rich people to lose large amounts of money shoving AI down everyone's throats in the hope that that will eventually lead to them recouping those losses.
I believe a significant amount of AI criticism is really about this, and that means we need to resolve the overall issues of wealth inequality and economic skewing. People would be much less angry about AI if its development and ownership were more diffuse, and if the patterns of its use were more directly connected to its current observable abilities, rather than based on what some group of insiders thinks about how much its stocks may go up in the future.
I am not sure if inflation will work exactly the same in a world where AI/robots do all the work.
Inflation is driven by scarcity. More demand for a fixed/limited resource drives up the price. Historically, every good and service humans bought followed this pattern, so we didn’t even have to consider an alternative.
Already in our current economy, however, we have seen a good portion of our economy shift to things that do not have this characteristic. For example, take something like a video streaming service. The marginal cost for additional demand is small enough to be almost negligible; if everyone in the world decided they wanted a Netflix subscription, there wouldn’t suddenly be a shortage of streams or a run on episodes of The Great British Bake Off. They would have to build more datacenters, but the cost per additional user is tiny compared to almost every other traditional good that came before.
If AI and Robots start doing all work, then this would spread to more of the economy. The increase in productive capacity would severely reduce the limitations that have historically driven inflation. We obviously have to invest in building robots and AI, but once we have enough robots they would be making more of themselves and we would be limited by natural resources, but we could use robots to get more of those, too… and we could focus on clean energy, since we would have plenty of robots to do that work, too.
The USA will never have UBI, period. So any idea that includes any mention of it is an absolute non-starter. Outside of the USA, perhaps, but for us that is never happening.
In my opinion the main, and really only, issue: AI is a necessity. Everything from war (including defense departments), to jobs, to rental advertisements, to food packaging, to restaurant reviews, to news, to education, to programming, to architecture, to politics ... will have to change due to AI. Not changing them is not really an option. Everything needs to be figured out here.
A lot of this will both cost money AND require people to change their jobs, their investments, their equipment, ... And they hate it.
Everyone, including governments will have to adapt.
And to add insult to injury, everything comes from the US and it's really expensive.
No, you have it backwards. The first point is definitely the most important and the trickiest.
UBI is a dangerous distraction in this context. It's a mammoth cost to achieve an impoverished quality of life. It may be worth implementing in general, but it absolutely must stay out of the conversation about AI. It's like if the ruling class started announcing that they would like to imprison us all, and your "discussion" about the problem revolved around how we can make our future jail cells feel as nice as possible.
We are allowed to regulate businesses. We simply don't.
I think frontier AI research should be outlawed until such time as there's a broad consensus on how society ought to deal with it. This would have to be coordinated internationally to be effective, but I think that would be achievable if the US sent a credible signal by forcibly shutting down any one of the major labs.
Even supposing we could somehow get the political will to do this, how would you write such a law? What counts as “AI frontier research”? How would you write a regulation around that that isn’t trivial to bypass without banning general computing itself?
As I said in a sibling comment, we're fortunate that training modern AIs requires large quantities of specialized compute. We just have to restrict GPU sales and outlaw GPU farms. I don't deny that it would be a seismic, controversial change, but I don't think it's terribly hard to implement if we can reach a consensus that we want to implement it.
It means that if something is physically possible, someone will be doing it, regardless of legal, moral, or social barriers. False on its face? Not that long ago, global public opinion was mortified at the news that newborn twins in China had been genetically modified. I am old enough to remember the outrage in the late 90s as the world watched the first cloned sheep grow up, get sick, and die. It was possible to do, so someone had done it.
The point is - with the use of law, morality, social pressure, we can moderate the frequency and scale of some phenomena, but we cannot stop it. I think this idea is what prevents some bans. "If the Chinese can do it, and we stop ourselves from doing it, they will gain an advantage and we would lose". Substitute "the Chinese" with whoever is the opponent at any given point in time and you have a rather plausible explanation for why things were the way they were.
There were historical worries about whether a ban would be feasible, but frontier AI research as we understand it today requires large amounts of specialized compute. Even if we couldn't or wouldn't destroy the chips, we could imprison anyone who tries to start a large training run, the same way we imprison anyone who tries to buy enriched uranium.
Yes, that is true, but it's not my point. I am not saying it'd be impossible to find people who are doing it. My point is that there will always be a group of people, who'd be willing to do potentially dangerous things as long as those things are possible and are believed to provide some sort of advantage. For that reason, those people would either be in decision making positions or have a good enough offer to decision makers. Speaking of uranium - I don't think AI is anything like it (although the AI industry propaganda really wants us to believe that), but even there we have examples of countries that were pursuing nuclear weapons both successfully and unsuccessfully as well as countries that could have them, but choose not to. So the ban itself isn't necessarily the main point here.
It is quite telling that so many comments here are about UBI as a solution. UBI is a billionaire proposed solution, or distraction. Yeah, of course they want to keep control of the surplus and just have a sustenance spigot for the former workers.
> We are allowed to regulate businesses. We simply don't.
If workers are defunct, what are businesses? Also defunct. Business owners can’t gloat about not needing workers while at the same time claiming that their businesses have a right to life. What is a business owner sitting on a completely automated set of assets? Smaug sitting on his cache of gold.
> The first is the fear of job loss, and I feel like this is the most straightforward to deal with. Personally, I think the solution should be to share the productivity of AI with society at large, in particular since AI owes most of its abilities to training on the works of society.
This is straightforward? This is a colossal task. Monumental. Billionaires own it. That’s the political status quo. You could build something to counter those centers of power. But from what base?
Well-paid software developers have scoffed at or been ignorant of worker organizing for, maybe forever? But I have good paycheck and equity... Now what?
But it's not a clear way to solve the issue. UBI, even if enacted tomorrow, doesn't stop the enormous crash of the middle-class, and the fallout of that. Maybe it will stop some people from literally dying - that's "solved"? It's a small buffer at the very worst end of a gigantic problem. The word "solve" is totally ridiculous.
Okay you’re right. In some sense of the word it is straightforward. But I still think it is not straightforward compared to most things.
I can get more muscle mass at the gym. That is straightforward. Only a few things make it not easy.
But “share the productivity with society at large”... you have to collapse so many more variables.
- How to organize political resistance against AI tech billionaires
- How to not get co-opted by counter-measures by AI tech billionaires
- How to resist false promises (backed by nothing) that some AI tech billionaire will enact UBI for everyone so everything will be fine (those with all the power can withdraw whatever they want at any point)
- How to deal with white collar competition in the interim period before automation: everyone using AI and nodding along with it[1] just to not “fall behind”
- How to potentially fight against a small minority (AI tech billionaires) but that now might have enough megawatts to turn their stochastic parrots against any dissenters
Like copyright. All modern LLMs are built on troves of copyrighted material that was used in their training. AI companies are claiming this is fair use, while pretty much all of the copyright holders would strongly disagree. This is going to get litigated for years, but regardless of what various legal systems decide, morally, people can be against this.
And people are already sick and tired of AI-generated content being used to replace human made content, be it on Spotify or TikTok. This is part "AI replacing humans", part "I'm being scammed by lower quality content".
This is the "safety" messaging that OpenAI and Anthropic keep harping on and on, and on about, while whistling a merry tune as they turn around and sell AI to the US military and worse, to the tune of $billions/year already.
The "and worse" needs elaboration, because fundamentally the single biggest cash cow for AI vendors will be (and maybe already is) implementing a dystopian future where everything we say, type, or do will not just be recorded but also: read, analysed, and cross-correlated by unfeeling heartless machines tasked with keeping us in line.
I'm not being paranoid, President Biden said as much, but only in reference to China. If you think only China has motivation to use AI to keep a lid on dissent, I have a bridge to sell you. And if you think the Land Of The Free(tm) will never abuse AI in this manner, well... I have some bad news. You may want to sit down.
Here in Australia, the cyberpunk dystopia is already starting to be rolled out. A customer of ours asked their IT team to hook up a variety of HR-related information sources to their new AI system tasked with making recommendations for hiring, promotion, and demotion.
Yeah, AI-enabled surveillance capitalism is likely to be every bit as bad as what people imagine China is doing with their social credit scores.
And the scary thing is that you can probably easily sell it to Democratic voters if you track racism scores for people, so you can filter people out of your dating pool or job/rental applications. Most people don't care about privacy as a fundamental right, and they'll roll over and compromise if you give them a way to track what they hate. You just need to make sure it is "bipartisan" and it'll be wildly popular.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income.
The very same CEOs are extremely against social support, any taxes on themselves, and any governmental agencies that help or protect people.
How can this possibly be the easiest option in the world of Thiel, Musk, Trump, Vance, and Palantir, with the Overton window having moved toward economic conservatism for years?
Picasso famously said "Computers are useless, they can only give you answers."
You can't put things back in the bag. Perhaps the true underlying social problems are:
1. There are too many humans and not enough jobs.
2. The capitalist system only rewards profit seeking and cost externalization.
3. Our democratic representation myth is dead and buried.
4. Even in the developed world, middle-class security is gone.
So here's my question: given the current global system has failed and is clearly in its death throes, as a pan-national species how can we transition to a less mono-focal economic rationalism driven means of governance and self-organization without turning in to an autocracy or reinforcing negative nationalist bloc-level thinking that will tie us in to the same old human-thump-human stone age ape-ism and environmental cost externalization?
Perhaps AI can help in areas like improved education, improved media, proposals for improved government process or process transition for enhanced efficiency. Enforce transparency and accountability in the halls of power by reducing human process and corruption. Public auditable decision making and public auditable oversight. It's at least potential grounds for partial optimism. The best I can summon under present conditions. Of course, we want to avoid a dystopian global AI autocracy, the technocratic basis for which we have already well established, but if you view the present system as a dystopian human autocracy with the same technocratic basis (an increasingly rational perspective given recent events), then it starts to look more rosy.
> The easiest way would be a straight tax on AI usage, and using that tax to pay a universal basic income.
If the Epstein class wouldn't go for something like this in a world where they needed workers to produce, the idea that they will when we are surplus to requirement is inconceivable.
Why put a number on it? Every number so far has been wrong. Can we agree on the negative impacts of humans on an environment conducive to humanity without putting obviously wrong timings on predictions? I bet your intention is to provoke urgency but to most people it just causes an eye roll because it's not true, whereas the underlying ideas are true.
Very much agree. It's a pretty common mistake to bundle real information with obviously wrong details and lose credibility. Especially in the eyes of people looking for a reason to discredit the argument.
The disingenuous people who discredit climate change will do so no matter how serious people act. There is no point in changing behavior on their account.
The point is to convince people who are undecided. Using information that's known to be false or weakly supported is then short-sighted and counterproductive, because enough false predictions will turn up that those undecided will tune out entirely
I think their point is that discounting the time estimates is more a constant shifting of the window of what we expect more than them being de-facto incorrect. They’re more off by degree (e.g. an XX% reduction vs complete extinction) than being worthless. As the example points out a large reduction can be very similar to an annihilation it’s just that we are only used to what we know so we constantly shift what is normal.
You have sailed past the point. There were so, so many cod it was hard not to catch a bunch. That isn’t a metric, it’s an indicator that most likely meant vast unseen numbers. The tip of the iceberg is a metaphor for a reason, though it may become an anachronism within our lifetimes.
So make predictions about stuff that happens next year and be right about them. The problem is that strongly predicting what will happen in 30 years has always been wrong so far. My point is just focus on what you know. Anyone can say whatever about 30 years from now and ride that for the next 29.
>strongly predicting what will happen in 30 years has always been wrong so far
No it hasn't, this is climate change denialist nonsense. In fact no less a figure than ExxonMobil correctly predicted the trajectory of global CO2 levels and corresponding increase in warming as far back as the 1970s and their predictions remain accurate today.
I've been alive long enough that my hometown was supposed to be underwater several times already. Climate change is real and predictions have also been very wrong.
because whales can communicate over distances in the thousands of kilometers, and nowadays, because of marine traffic, they are lucky to get into the hundreds of meters
microplastics in the ocean don't have a good prognosis for a reduction in numbers, either
Right. Just because some predictions weren’t accurate doesn’t mean they were directionally inaccurate. The biodiversity and total volume of plant/animal/marine biomass that’s not human, or commercially consumed by humans, has depleted in the last 50 years, and it only accelerates every year. There will objectively be fewer whales, if any, in 50 years. Life as we know it is ending and has been for decades.
Because they think it might make people give a shit enough to do something to change that outcome?
Fear is a strong motivator, but it is not a good one in this case. To really be effective, there must be the threat of direct, immediate, and severe consequences.
Instead it causes people to treat their messages as hyperbolic and undermines their entire movement.
tl;dr: there's very poor ROI in doing nothing to improve our polluting habits and banking on the world sorting itself out.
Furthermore, most actions we can take to improve climate outcomes can also improve societal and technological outcomes. The only downside to taking more actions to have clean energy and less pollution are based on made up economic rules that normal people are supposed to follow, but that the super rich/powerful skirt at their leisure. A cleaner future benefits the VAST majority, irrespective of climate change. And the bonus is that if climate change does progress, we're better suited to manage it.
Or we can keep burning liquified dinosaur bones and partying like cigarettes don't cause cancer. I get the appeal of the 60s for how care free people could be - they lived without consequence. And we're stuck dealing with their failed policies.
While I have no problem blaming the rich, if you are posting here you are most probably part of those people who skirt responsibility at their leisure. Even I, with a lifelong devotion to climate and environmental issues, have a hard time being a net positive. The only way to not skirt your responsibilities right now is to be a Greta Thunberg.
> liquified dinosaur bones
I know this is a nice factoid, but it does not happen to be true. When I was 13 I did believe it, so nowadays I try not to spread it. We can talk instead about the fascinating history of millions of years of efficient carbon storage on our planet.
The rich and powerful bit was specifically about how we could easily do more for clean energy and pollution if politicians and ultra elites stopped acting like it's economics preventing us from doing so. World powers are fine to go to war on a whim, but the second we talk about health care, cleaner energy, pollution, or other topics that will broadly benefit humanity, we are met with "this is too expensive to do".
And on the 'r' side of the r/K reproductive strategy spectrum: whales are literally the exemplar of K-selection, that is, a very small number of high-quality offspring.
Whale lifespans are long, populations and fecundity / brood sizes are small, sexual maturity relatively late, and childhood mortality relatively high. All of these make for slower rather than more rapid evolution.
Species such as krill (on which many whales feed) are far more likely to evolve rapidly in the face of increasing selection pressures. Whales might well find themselves boxed into an inescapable evolutionary corner.
Evolution of small things like algae and the krill which feed on it and feed the whale is quite fast. Single celled organisms reproduce on the scale of 20 minutes and hold immense amounts of genetic diversity in their populations to facilitate the success of a better adapted line almost immediately. Additionally, they are adept at horizontal gene transfer from other well-adapted organisms.
This would be great news if the whale literally only required krill to survive, but complex megafauna have complex needs, so the ability of krill and other small creatures to evolve is largely irrelevant in a discussion regarding the ability of megafauna to survive. This is especially true if you read TFA and see that the whales already adapt to eat different things as necessary.
Algae are the bottom of the ocean food chain. Everything interacts with it. But algae's happy to grow in a bowl of water left in the sun.
Lots of things eat krill and small fish. They're near the bottom of the foodchain too. In addition to algae, krill are opportunistic omnivores who often consume detritus. But their primary diet is algae. Small fish tend to be pretty similar.
It's not that other things don't interact with algae or krill or small fish, it's that those groups are the bedrock of ocean ecology. And single-celled organisms like algae are tough as nails in aggregate. We couldn't kill them all if we tried. Pool owners will be familiar with the struggle.
But it's not a bottom up interaction. If whales are killed off from climate change, then those other things can get out of control. Too much algae, and then you have hypoxic environments.
A perfect example of this is when sea otters were nearly hunted to extinction which caused sea urchins to flourish which caused the death of coral and coastal environments which started to affect the larger things that depended on those environments.
My point is that any change to the careful balance can have non-linear effects.
I think we're coming at this from different directions. The OP I responded to originally said: "Warming will kill off most of the systems these animals depend on within 30 years." which isn't what you're talking about. A top-down extinction looks like whaling in the 1800s and we already had that. Now they're on the mend.
It could easily become this fast or even faster, if we would just stop worrying so much about "playing god" and focus instead on getting good at this job. We don't have much time for this either, as AI is on the trajectory to take over that mantle in the next decade or three, whether we like it or not.
But seriously, we may not have much choice. Natural evolution stopped being able to adapt to environmental changes after it created us; genetic engineering is essentially the only way to make biology adaptable enough again.
The next question is which traits you choose, and after that, which traits are "better", because choices will imply an ordering; then you open a big can of worms that last time killed millions of people. So maybe there are other ways to avoid doom that don't themselves create doom, as happened last time we went down this path.
Unpopular opinion for obvious reasons, but probably the only realistic one apart from just witnessing one extinction after another. Pollution and climate change aren't going anywhere until we elevate the whole world to the level of, say, western Europe.
But since we humans are pretty arrogant with our wisdom and lack long term patience, I can see many ways where well-intended meddling can end up in catastrophe overall.
Imagine you killed off all of humanity save for a couple people in Muncie, IN. How long until the next Shakespeare or Einstein emerges? Better yet, a properly heterogeneous culture?
What they mean when they say 'cached' is that it is loaded into GPU memory on Anthropic's servers.
You already have the data on your own machine, and that 'upload and restore' process is exactly what is happening when you restart an idle session. The issue is that it takes time, and it counts as token usage because you have to send the data for the GPU to load, and that data is the 'tokens'.
Wrong on both counts. The kv-cache is likely to be offloaded to RAM or disk. What you have locally is just the log of messages. The kv-cache is the internal LLM state after having processed these messages, and it is a lot bigger.
I shouldn't have said 'loaded into GPU memory', but my point still stands... the cached data is on the Anthropic side, which means that caching more locally isn't going to help with that.
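To see why the kv-cache dwarfs the local message log, here's a back-of-envelope sketch. All the model numbers (layers, KV heads, head dim, fp16 values) are illustrative assumptions for a large transformer, not Anthropic's actual architecture:

```python
# Rough size comparison: kv-cache vs. the plain-text chat log.
# Model shape below is a hypothetical large transformer, not any real model.

def kv_cache_bytes(seq_len, n_layers=80, n_kv_heads=8, head_dim=128,
                   bytes_per_value=2):
    # One key vector and one value vector per token, per layer, per KV head,
    # stored at 2 bytes per value (fp16/bf16).
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value * seq_len

def log_bytes(seq_len, bytes_per_token=4):
    # The chat log is just text: a handful of bytes per token on average.
    return bytes_per_token * seq_len

tokens = 100_000  # a long coding session
print(f"kv-cache:    ~{kv_cache_bytes(tokens) / 1e9:.1f} GB")
print(f"message log: ~{log_bytes(tokens) / 1e6:.1f} MB")
```

Under these assumed numbers the cache is tens of gigabytes while the log is well under a megabyte, which is why the provider evicts the cache from GPU memory and why rebuilding it means re-sending and re-processing the tokens.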
I wonder if the 'English language speakers saw the biggest increases in unhappiness' is related to something else I keep reading about, which is that countries like Russia are spending huge amounts of money on campaigns to decrease stability in the west.
If they are making a concerted effort to drive the narrative in English speaking online communities, it would make sense that English speakers would be most affected.
I've been suspicious of this. It's typical for mainstream social media to become negative because that's where legitimately unhappy people go to complain, but things like Reddit and Twitter seem completely bot-farmed on top of that. And more and more people have been online, especially starting 2020. Even seems like Trump or his close contacts are browsing Reddit comments.
Look farther afield. For example, remember that brief "war" between India and Pakistan last year? Right after it ended, a couple of populations of bots appeared on X. One group of bots spammed anti-Muslim propaganda. Another group of bots spammed anti-Indian propaganda. We know this because when there were Internet outages later last year (in that part of the world), whole groups of accounts went silent at once.
Basically, there are foreign propaganda bot farms who don't propagandize their own populations, but instead focus on the US population. Generally trying to get Americans to turn on their (the people running the bot farms) enemies. Sometimes those enemies are countries, other times they are immigrant populations. Funtimes huh...
PS Yea, I know Israel does it, so does HAMAS and a bunch of other countries including both Russia and China.