> Keep that in mind whenever you read breathless headlines about AI causing layoffs or dampening hiring. It’s something like 1% of all firms - split roughly evenly between “increasing employment due to AI” and “decreasing employment due to AI”.
"But though our thought seems to possess this unbounded liberty, we shall find, upon a nearer examination, that it is really confined within very narrow limits, and that all this creative power of the mind amounts to no more than the faculty of compounding, transposing, augmenting, or diminishing the materials afforded us by the senses and experience. When we think of a golden mountain, we only join two consistent ideas, gold, and mountain, with which we were formerly acquainted. A virtuous horse we can conceive; because, from our own feeling, we can conceive virtue; and this we may unite to the figure and shape of a horse, which is an animal familiar to us. In short, all the materials of thinking are derived either from our outward or inward sentiment: The mixture and composition of these belongs alone to the mind and will. Or, to express myself in philosophical language, all our ideas or more feeble perceptions are copies of our impressions or more lively ones."
- David Hume, An Enquiry Concerning Human Understanding
What happens if you flood the market with a bunch of implausible bets like "sun won't rise tomorrow"? Sure, you might try to filter that out with some sort of "seasoning" period (ie. don't buy new markets), but then that means more time for arbitrageurs to correctly price the market, depriving you of any price advantage you might have had.
This locks up your money in the meantime, right? If so, considering the fed funds rate is 3.64% (and you can probably get higher rates on stablecoins), a huge chunk of those "winnings" is going to be eaten up by the opportunity cost of the money.
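The opportunity-cost point is easy to make concrete. A minimal sketch, where the stake and lock-up period are hypothetical and only the 3.64% rate comes from the comment above:

```python
# Hypothetical example: $10,000 locked in a market for 6 months
# while the fed funds rate sits at 3.64% (rate from the comment above).
stake = 10_000
annual_rate = 0.0364
months_locked = 6

# Simple (non-compounding) forgone interest over the lock-up period
opportunity_cost = stake * annual_rate * (months_locked / 12)
print(f"Forgone risk-free interest: ${opportunity_cost:.2f}")
# → $182.00, which any "winnings" must beat just to break even
```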
You forget that Polymarket is just a casino, and the house always wins.
For example, recent events show that any bet can be selectively disputed for arbitrary reasons ("we found insiders", "we found this immoral/illegal", etc.).
That logic doesn't work because not every bet has even payouts. If there's a market for whether a die rolls a 1 or not, the odds might resolve to "no" 83% of the time, but if it only pays $1.10 per dollar wagered on "no", you're still losing money.
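A quick expected-value check using the figures from the comment above (a fair die, so "no" wins with probability 5/6 ≈ 83%):

```python
# Expected value of a $1 "no" bet on "die rolls a 1":
# "no" wins with probability 5/6, returning $1.10 total per $1 staked.
p_no = 5 / 6
payout_per_dollar = 1.10  # total returned per $1 if "no" resolves

expected_return = p_no * payout_per_dollar  # losing side returns $0
print(f"Expected return per $1 staked: ${expected_return:.3f}")
# ≈ $0.917 — under $1.00, so despite winning 83% of the time,
# the bet loses money on average
```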
I tried it a few times and always found it disappointing. It typically started off like a structured "lesson", but as I chatted with it, it would forget the syllabus it had proposed and we never "completed" the thing we set out to learn.
I think this was a pretty honest write-up of what went well and what didn't, and I think it's directionally pragmatic on takeaways.
> The agent knew the experiment ended at Day 30 since I told it as much in the system instructions, and so it played it safe. It doubled down on what was already working rather than taking creative risks, whereas a (good) human strategist would’ve experimented aggressively in weeks 1-2 and refined later. The agent just tried to ride out the month at a predictable rate.
> Then when I tried to fix quality myself (the email validation gate), it caused the worst performance of the entire experiment. Same trap that human-run campaigns fall into - optimizing for what’s measurable rather than what matters. Main difference is an AI agent just does it faster and with more confidence, which honestly makes it more dangerous.
> If you’re running any kind of recurring workflow where you pull data, make decisions, and act on them, the loop pattern here probably applies to your work already. The hard part is figuring out what to actually optimize for, and clearly articulating that.
Although I have to say I am sometimes surprised how much people burn through their usage. I was briefly on a Claude Max plan and then switched to a pro plan and still almost never hit my limit.
It's really not. As a one-person IT department I'm now able to build things in hours or days that previously would have taken me weeks or even months to build (and thus they didn't get done). Things people have wanted for years that I never had the time for, I can now say "yes" to.
Yeah the ops alone is a huge win. It’s such a win I didn’t even think to mention it ha.
Dangerous too of course. So many times I've had subtle unexpected side effects. But it's all about pinning things down well, and that's what we're all still figuring out.
> Is writing it by hand the old-fashioned way not on the table?
Of course it is. I started a (commercial) product in Jan, on track for in-field testing at the end of April.
Of course, it's not my full-time job, so I've only been working on it after hours, but, with the exception of two functions, everything else is hand-coded.
I rubber-ducked with AI, but it never wrote the product for me (other than those two functions, which I felt too lazy to copy from an existing project and fix up to work in the new project).
Absolutely not. I took on some things that would normally take 5-10 people and many months.
Some people turn out slop. I was really excited to try and make some impressive shit. My whole life has been dedicated to trying to embody what Apple preached in the early days.
I knew this was coming, but I thought I had a little more time to try and get them over the finish line, ya know?
Maintenance by hand might be achievable, but it’s extremely hard when you’ve built something really big.
I’ve only got so much savings left to live on.
I’m not saying anyone owes me anything, but we all need to pivot, and I’m a lot less sure my pivot is going to work out now.
> I took on some things that would normally take 5-10 people and many months.
Based on what, exactly?
It's very easy to claim some software would've taken you months to make, but this is ridiculous. Estimating project duration is well known to be impossible in this field. A few years ago you'd get laughed out of the room for making such predictions.
> I’ve only got so much savings left to live on.
Respectfully, what are you doing here?
Yeah sure, the Apple dream. But supposing AI did in fact make you this legendary 100x developer, it would do the same for everyone else, including those with significantly more resources. You'd still be run out of the market by those with bigger budgets or more marketing, and end up penniless all the same.
I would strongly recommend you not put all your proverbial eggs in this basket.
I’ve pivoted to writing native iOS, macOS, Windows, and Linux apps. Most of my career has been front-end web. It would take me a while just to learn and practice, versus having my visions working in hours or days.
I’m not ready to unveil the thing I alluded to, it’s important to me that it’s good and polished. But I’ve done quite well so far developing in Swift, Rust, Go, and coming up with marketing and design — things I definitely couldn’t do by hand without a lot more time and effort.
https://poolometer.com/
Is one of the things I’m almost ready to call ready. So much domain expertise or tedious math involved — I simply wouldn’t have bothered on my own, pre-AI
I agree it’s a huge existential risk that everyone else is also amazing. So far that’s not true. I get hung up on a lot of little quirks, like getting Dolby Vision to play properly on Apple Silicon without Vulkan. Something I accomplished after about 2 weeks of relentless determination.
To be clear, I’m just trying to answer your questions honestly. I understand the situation. It’s almost to my benefit the harder it is for non-software-engineers. But in our current reality, when I’m not launched yet, it’s more stress.
> So much domain expertise or tedious math involved — I simply wouldn’t have bothered on my own, pre-AI
This is what I was alluding to. AI did not let you write software you couldn't otherwise make, or let you write it faster. You skipped doing the research because AI gave you plausible results, but without doing the research yourself you cannot be sure of its accuracy.
That isn't faster software development, it's reckless software development, and nothing really stopped you from doing it before other than your own recognition that pulling numbers out of your ass is a bad idea.
> I agree it’s a huge existential risk that everyone else is also amazing. So far that’s not true. I get hung up on a lot of little quirks, like getting Dolby Vision to play properly on Apple Silicon without Vulkan. Something I accomplished after about 2 weeks of relentless determination.
That would be "doing the research", and as you have observed, is the slow part then and now.
Ultimately, we need to know the true cost of this technology to evaluate how effectively or ineffectively it can displace the workforce that existed before it.
The only catch is that you’ve spent many $1s and you don’t get any of those $10s unless you get over the finish line.
In that sense your analogy is kinda good. I totally agree the current situation is like getting my solo start up funded and subsidized … but with only like 4 months runway now that the prices are skyrocketing, vs ~2+ for a typical YC venture
Yeah, but... it's rocketing for everyone at the same time on all the providers at once.
IOW, you are no further behind nor further ahead than your competitors compared to 1 week ago, 1 month ago, 1 year ago and 1 decade ago.
Everyone has the same tools you have. The only advantage you get is if you make your own tools (I did that, and pre-AI, was able to modify my LoB WebApps at a rate of 1x new API endpoint, tested and pushed to production, every 15m).
My comment was about the rapid and sudden cost spike of something happening unexpectedly.
They announced 2x tokens with months of notice. They announced this with no notice.
Me as an individual making a go of it solo is not the same as thousands of funded businesses having free credits, subsidized plans, and bottomless AI budgets.
For a short period this was a massive equalizer. Now it’s a tool for those who can afford it. That’s a big shift.
—
Why is it that a person cannot express their own circumstances or opinions on this site without it turning into an argument? It’s so deflating.
If my math is right, assuming a mix of around 70% cached tokens, 20% input tokens, and 10% output tokens, it breaks even with the old pricing at around 130k tokens per message, or about 13k output tokens per message.
With the hidden reasoning tokens and tool calls, I have no idea how many tokens I typically use per message. I would guess maybe a quarter of that, which would make the new pricing cheaper.
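The arithmetic behind that kind of breakeven estimate can be sketched as follows. The per-token prices and old per-message cost below are purely illustrative placeholders (the thread never states the actual rates); only the 70/20/10 token mix comes from the comment above:

```python
# Hypothetical pricing sketch — all dollar figures are made-up placeholders.
MIX = {"cached": 0.70, "input": 0.20, "output": 0.10}          # mix from the comment
NEW_PRICE = {"cached": 0.30, "input": 3.00, "output": 15.00}   # $/1M tokens, hypothetical
OLD_COST_PER_MESSAGE = 0.05                                    # $, hypothetical flat cost

# Blended $/token under the new metered pricing and assumed mix
blended = sum(MIX[k] * NEW_PRICE[k] for k in MIX) / 1_000_000

# Messages cost more under the new pricing once they exceed this size
breakeven_tokens = OLD_COST_PER_MESSAGE / blended
print(f"Blended rate: ${blended * 1e6:.2f} per 1M tokens")
print(f"Breakeven: ~{breakeven_tokens:,.0f} total tokens per message")
print(f"Of which output: ~{breakeven_tokens * MIX['output']:,.0f}")
```

With different real prices plugged in, the same two lines of arithmetic yield the ~130k-total / ~13k-output figure quoted above.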
MiniMax M2.7, MiMo-V2-Pro, GLM-5, GLM5-turbo, Kimi K2.5, DeepSeek V3.2, Step 3.5 Flash (this last one is particularly cheap while still being powerful).
But it was well understood that the subscription was heavily subsidized. Whether or not it was a "separate product" doesn't matter as much as the fact that pricing was not sustainable.
It was not well understood that it would stop being subsidized without notice.
Does that just not matter in modern society? Am I an asshole for expecting the product I pay for on day 1 to be the same on days 8 and 29 of a 30-day subscription?