> What I want is an anti-costco. More like a bodega. Still curated, maybe a larger mark-up, but smaller quantities of everything. Half loaves of bread, small bags of frozen veg, enough sugar or flour to bake just a couple batches.
This is becoming even harder to achieve nowadays. There is so much variety in product sizes, and more and more over the years (at least in the Midwest) it seems that grocery stores want to take the small product and apply purchase minimums to its deals.
There will be an 8 oz offering and a 14 oz offering; the 8 oz will be on sale, but only if you buy at least 2 or 3. It's incredibly frustrating.
It has incidentally made my junk food habits better, though. If I see 2 for $5 on a package of cookies with no minimum purchase, I'll likely grab a box. As soon as they apply that minimum, I start thinking "do I really wanna eat all those cookies?" and instead I end up with 0.
> the 8 oz will be on sale, but only if you buy at least 2 or 3. It's incredibly frustrating.
Have you tested this by buying just one, and checking the price on the receipt?
I ask because someone once told me this was illegal in the US; that a shop was allowed to display the sale price only for a larger quantity, but they had to honor the same price per unit if you only bought one. (I think we were discussing produce at the time, in case that matters.) I've long wondered if that was true or just an urban legend.
My grocery store does both. If the label says "Sale: 2/$5, Was: $3.99" and I buy one, I get charged $2.50. If the label says "2/$5, Single item: $3" and I buy one, I get charged $3.
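The two labeling schemes above can be sketched as a tiny pricing function. This is purely illustrative (the function name and shape are made up, not any store's actual point-of-sale logic): with no separate single-item price the deal prorates, and with one, items short of the deal quantity ring up at the single price.

```python
def price(quantity, deal_qty, deal_price, single_price=None):
    """Total charge for `quantity` items under a multi-buy deal.

    No single-item price on the label -> prorate (2/$5 means $2.50 each).
    Otherwise, items short of a full bundle cost the single-item price.
    """
    if single_price is None:
        return quantity * (deal_price / deal_qty)
    bundles, leftover = divmod(quantity, deal_qty)
    return bundles * deal_price + leftover * single_price

print(price(1, 2, 5.00))                      # 2.5 -> "Sale: 2/$5", no single price
print(price(1, 2, 5.00, single_price=3.00))   # 3.0 -> "2/$5, Single item: $3"
```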
Most likely that pricing rule was, or is, at a more local level. There's nothing like it at the national level in the US, but some states, cities, or counties can and do impose such rules.
IIRC, yes, the discount did not apply to a smaller purchase. The label is also always explicit about the cost if you don't buy those quantities ("2 for $5 in any quantity" is still sometimes the actual offering). I'm assuming my memory is correct because checking that I'm reading the label correctly is now baked into my shopping routine.
Meijer is slowly becoming a bad offender on this front. Jewel has been horrifying for years, to the point where I avoid their stores entirely; the final straw was seeing this kind of minimum applied to gallons of milk.
The computers are not dumb. If you do not purchase the correct number of items, the discount is not applied. Also, if you do not have a member/loyalty account, you do not get those discounts. They now have a new level that requires their app for "digital" coupons on top of the loyalty prices. There are many times when I don't input my number until the very end, and then watch it calculate all of the deductions. Sometimes it's not much, but I've seen it drop $30 from the "member" price discounts.
I thought this was obvious, but to spell it out: I was suggesting that they might not necessarily be programmed to apply a different price depending on quantity. An item might have a flat price of $1 each, but labeled on the shelf/bin as "special: five for $5" to encourage larger purchases.
I have personally encountered this. Meanwhile, I do not recall an example of buying a quantity smaller than suggested and being charged a higher price per item. Hence my question about labeling and law.
> Also, if you do not have a member/loyalty account, you do not get those discounts.
> An item might have a flat price of $1 each, but labeled on the shelf/bin as "special: five for $5" to encourage larger purchases.
That's not a special, that's just math. I've only ever seen that kind of nonsense from Amazon. I've seen "Buy 3 for $5" while the individual price is $1.99: if you buy one you pay $1.99, if you buy two you pay $3.98, but if you buy three, you end up paying $5. The receipt will show 3 @ $1.99 with a discount under the item bringing the total to $5. My store routinely has various meat offerings of "Buy 1, get 2 free". If you ring up one, it shows the price. If you ring up 3, it shows all three items, but discounts the cheapest two, so you only pay for the single highest-priced item.
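The two receipt mechanics described above can be sketched like this. This is an illustrative model only (not Amazon's or any store's real code; the function names are invented): a "buy N for $X" discount line per complete bundle, and a "buy 1, get 2 free" deal that waives the cheapest items in each group of three.

```python
def n_for_deal(quantity, unit_price, deal_qty=3, deal_price=5.00):
    """Ring items at full price, then apply a discount line per complete bundle."""
    subtotal = quantity * unit_price
    bundles = quantity // deal_qty
    discount = bundles * (deal_qty * unit_price - deal_price)
    return subtotal - discount

def buy_one_get_two_free(prices):
    """Charge only the most expensive item out of each group of three."""
    total = 0.0
    for i, p in enumerate(sorted(prices, reverse=True)):
        if i % 3 == 0:  # every third item, most expensive first, is paid
            total += p
    return total

print(round(n_for_deal(1, 1.99), 2))   # 1.99 -> no bundle, full price
print(round(n_for_deal(2, 1.99), 2))   # 3.98 -> still no bundle
print(round(n_for_deal(3, 1.99), 2))   # 5.0  -> 5.97 minus a 0.97 discount line
print(buy_one_get_two_free([8.99, 7.49, 6.25]))  # 8.99 -> cheapest two waived
```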
Major chains are not going to be futzing around with gotcha tags. They know they'll be called out for it. It's the bodega-style places that I'd suspect of that kind of shenanigans.
> I'm not talking about membership discounts.
Why not? It clearly shows two different prices. If you are not using a discount/loyalty card, you pay the full price. A lot of times, in a lane with a human cashier, I've seen them keep a card on standby (probably their own) to collect the points while giving the buyer the lower prices.
> I ask because someone once told me this was illegal in the US; that a shop was allowed to display the sale price only for a larger quantity, but they had to honor the same price per unit if you only bought one.
No, WTF? That's not a thing, why would you even credit such obvious nonsense?
Really feel like the current versions are for sure "good enough". That's not how market capture is gonna function, though; they are gonna keep pushing, because the only moat is staying ahead, so the problems are gonna stay strange. At some point more compute isn't a reasonable answer and optimization is, and my feeling is we are well past that point from a product perspective, but IPOs etc. etc.
The only moat is the US trying to buy all the compute hardware in the world for the next two years. Then China, AMD, etc. are just making their own chips.
So I think the current generation of models are arguably all about the same in terms of capability. However, the requirement for exponential growth I mentioned is all about the economics.
AI companies are trying to ride a growth wave where the income curve lags the expense curve by 1-2 years, and at the same time investing 10x their historical income on next year's projected demand.
Everyone is selling their API calls at a loss, because to capture the investment required to scale the business up and the costs down, you need to grow your market now (in relative and absolute terms). And history shows that in big tech you often have winner-takes-all situations, or at least a couple of big firms will dominate and the others will die. That's where market share becomes a key strategic goal.
But to secure that, they also need to be building next year's compute now. And if their anticipated compute needs are 10x this year's, they've got a serious funding problem, one that can only be filled by capital with an appropriate risk appetite. You can only get this high-risk capital when the potential payoff is even more enormous, or when it's a smaller bite of a much bigger pie. Hence MS putting money into OpenAI, and so on. The investment needs are getting so big that we are starting to see some pullback from more conservative sources, but also record deals from others.
Now say an AI company does get the capital they need to grow. Well, they've still got a very serious supply problem. RAM, GPUs, water, electricity etc. Hence why there's a lot of deals and cross-investment going on - everyone is trying to secure resources and lower their overall risk exposure while keeping a foot in every possible door, so they can switch alliances whenever it's expedient, and because collaboration also helps the overall market to grow.
This all explains to me why the industry _needs_ the hype. These companies can't exist without it, because the money they need to sink in, in order to even be around in 18 months, far outstrips all reasonable financial practices. So it's capitalism on steroids or nothing. If you believe the AI story, then to that extent, it's rational.
But note that nowhere in this scenario does it suggest the actual consumers will be getting a consistent product at a consistent price!!!
I've found that with Opus 4.6, which I'm still stubbornly using, I can burn about 10% of the weekly limit within a single 5-hour window with my workflow.
Mentally I think about the weekly usage in terms of usage per day, so about 14% per day, which results in me not using much early in the week so I can kinda "burn freely" later on. That leads me to a spot where, on the final two days, I'm usually thinking about how to expend the usage I've "saved".
The 5-hour windows make this harder. Sometimes on the final day of the week I'm trying to get that 10% in every 5-hour window of my waking hours, and I HATE that. I wanna work when I am most productive, not around some ridiculous window of time. I don't wanna think "I'm gonna be utilizing Claude the most around 11am, so I should send a dumb message to Haiku at 7:30am to get my 5-hour window started, so it rolls over at 12:30."
So I'm happy about this change, sure. But it is 100% them creating a problem and pretending that some relief from that problem is them doing their users a favor. I understand they are doing it to lower peak-hour usage and all that; I still despise it.
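The pacing arithmetic above can be sketched in a few lines. The 10%-per-maxed-window and 7-day figures come from the comment; the constants and function name are mine and purely illustrative.

```python
WEEKLY_BUDGET = 100.0  # weekly usage allowance, in percent
WINDOW_COST = 10.0     # percent burned by one maxed-out 5-hour window
DAYS = 7

max_windows_per_week = WEEKLY_BUDGET / WINDOW_COST  # 10 maxed windows per week
daily_pace = WEEKLY_BUDGET / DAYS                   # ~14.3% per day

def saved_vs_pace(days_elapsed, percent_used):
    """Positive = usage 'saved' relative to an even daily pace, free to burn later."""
    return days_elapsed * daily_pace - percent_used

print(max_windows_per_week)              # 10.0
print(round(saved_vs_pace(3, 20.0), 1))  # 22.9 -> banked by going light early
```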
People are wasting tokens by using Opus for everything.
Using Advisor [1], you can use Sonnet most of the time; Sonnet can hand off work it can't handle to Opus. When Opus is done, you automatically go back to Sonnet.
I think the main reason that workflow has not worked for me is that I'm using an IDE version of Claude Code, which means my main agent isn't a crafted agent and is "stock" Sonnet or "stock" Opus. I'll likely swap to the CLI version soon and see if that remedies it (this isn't laziness on my part; I learned opencode workflows first because they apply more broadly, the only limitation being usage of a Claude subscription within it).
So with stock Sonnet I get the chatty, confidently wrong Sonnet instead of a strict crafted agent. Stock Opus is a lot more reasonable, and hands off simple tasks to crafted Sonnet agents with the chattier, stricter workflows, so I guess I'm literally doing the opposite (closer to what that old article describes).
I rarely use Opus for planning (in the Pro plan). Spec a feature in Sonnet, hand it to Haiku, come back for review. That’s a 5-hour window gone, sometimes 2.
I hit my weekly limit around day 4, with 2 maxed out windows per day (and sometimes a bit of usage at night).
I completely understand why people would use Opus for everything, it’s much more thorough and effective. Sonnet as well, but on Pro it’s gonna be Haiku all the time.
My workflow allows for about 10 windows being maxed out each week (if this thread's claim is true, that is now 5 windows). I always use Opus for planning and just have strict rules for delegation when it's actually crafting the code.
I have a pretty nailed-down .claude/ where the goal is single sources of truth: agent md files all reference the relevant files for whatever domain they are working within, with that domain's conventions, structure, etc. I think keeping this stuff up to date yields massive compounding context savings, and it's just better for performance, because it keeps every agent's context window free of noise by loading in only what is actually needed.
I've never really messed with Haiku for anything besides absolute low-end repetitive tasks; it's usually a crafted agent I use when I want to generate a bunch of seed data, or generic questions for tests, or something similar. My assumption is that it would otherwise just be terrible, and even though it's super cheap, you still inevitably bring the final results back to the better models; if those aren't valuable tokens, then I'm wasting both the Haiku tokens and the handoff to the better models on work that will be repeated anyway.
On the Pro plan you can max 10 windows per week using Opus for planning? I’m impressed. Even with Serena and really tight context management, I use Sonnet for most planning and Haiku for implementation. That gets me through the week doing 1 or 2 features and 10-ish windows of bug fixing.
> fully understand every change and every single line of the code.
I'm probably just not being charitable enough to what you mean, but that's an absurd bar that almost nobody clears even when the code is fully handwritten. Nothing would get done if they did. But again, my emphasis is that I'm probably just not being charitable to what you mean.
You're most likely being pedantic, like when someone says they understand every single line of this code:
x = 0
for i in range(1, 10):
    x += i
print(x)
They don't mean they understand the silicon substrate of the microprocessor executing microcode, or the CMOS sense amplifiers reading the SRAM cells caching the loop variable.
They just mean they can more or less follow along with what the code is doing. You don't need to be very charitable in order to understand what he genuinely meant, and understanding the code that one writes is how many (but not all) professional software developers who didn't just copy and paste stuff from Stack Overflow used to carry out their work.
How is that an absurd bar? If you're handwriting code, you need to know what you actually want to write in the first place, hence you understand all the code you write. Therefore the code the AI produces should also be understood by you. Anything less than that is indeed vibe coding.
A lot of developers don't actually understand the code they write. Sure nowadays a lot of code is generated by LLMs, but in the past people just copied and pasted stuff off of blogs, Stack Overflow, or whatever other resources they could find without really understanding what it did or how it worked.
Jeff Atwood, along with numerous others (whom Atwood cites on his blog [1]), was not exaggerating when he observed that the majority of candidates with existing professional experience, and even MSc degrees, were unable to code very simple solutions to trivial problems.
It's an absurd bar if you are being an uncharitable jerk like I was; the layers go deep, and technically I can claim I have never fully grasped any of my code. It is likely just a dumb point to bring up, tbh.
I saw your reply to another comment [0], and I see what you mean now. By "understand each line of code" I meant that one would know how that for loop works, not the underlying levels of the language implementation. I replied initially because lots of vibe-coding devs in fact do not read all the code before submitting, much less actually review it line by line and understand each line.
Well, that is how it mostly worked until recently... unless the developer copied and pasted from Stack Overflow without understanding much. Which did happen.
I do. If you don't, maybe you shouldn't be writing software professionally. And yes, I've written both DBs and compilers so I do understand what is happening down to the CMOS. I think what you are doing is just cope.
Nah, you're kinda encapsulating what I had in mind:
at what level of abstraction can you claim to actually "understand" the code?
You're claiming to understand down to the CMOS, but you are failing to even engage with what level of understanding should be accepted. Is "down to the CMOS" the bar? Because then you're gonna be fighting an uphill battle as potentially the only human who traces a simple hello-world Python script down to that level, because that's not how people develop software with high-level languages.
Is understanding print()'s underlying code the bar? Seems fairly gatekeepy; it's kinda intuitive what a print does. Everyone trusts it's gonna do what it's designed to do, the same way we trust the water that comes out of our faucets.
I'm locked in for a year of Claude Pro. I encountered the same issues as you a couple weeks ago; I'd get like one solid plan done and really, really hope it was a one-shot, because that was legit all I was gonna get out of it for those 5 hours, and it would be ~10% of weekly usage, enough to make me feel scared to hit send.
I got the $20 GPT tier, and now I just use Claude to craft MD plan docs instead, then hand them off to GPT 5.4, and it has been working great. I can do about 4x as much work or so, based on my feelings (not accurate). If I have just small, simple stuff to do I might still fire those off with Sonnet, and that seems plenty viable, but as soon as it's an Opus-tier task I swap to this workflow.
A little annoying, as now I'm kinda trying to manage both a .claude/ and an .opencode/ folder, but I just have the .opencode/ stuff reference the .claude/ stuff, so it's a little less bleh.
I've been keeping within my usage because I've been in a funk a bit, but when I was slightly more worried I'd sorta juggle whether Claude or GPT would handle writing some initial tests, as it did seem a bit imbalanced otherwise. Seems like GPT just spam-resets weekly usage throughout the week anyway, so it's prolly nbd.
You could try customer support; that chat bot will happily loop you through some more non-answers, but try to make you feel good about those non-answers :)
This test makes perfect sense given their actions over the last few weeks; they think they've done enough to transition to the general public and away from devs, and our goodwill is no longer something they feel they should be concerned with.
It's funny that OpenAI, who in my eyes went for the general public rather than devs initially, seems to be semi-pivoting and catching all the fallout from Anthropic's recent behavior.
It is a massive bummer. Up until a few weeks ago I had been hard pulling for Anthropic for quite some time; now I just don't care, and I hope something dope emerges quickly that signals I won't ever have to consider either of them.
GPT allows you to wire their models into other CLI tools, and I'm advising everyone I know to lean in that direction. I'm not trying to become hostage to something like Claude's ecosystem for the rest of my development career.
Feels beyond optimistic on their part. I'm just starting to hear their name blended into companies' wish lists on job listings, and they are destroying the goodwill of the devs who surely are the main reason their name has landed there. They aren't dug in like a Microsoft; maybe they get some staying power from no-code people who feel trapped, but I'm done with their nonsense already and won't recommend them anywhere. Other stuff is already good enough to match.
They need the devs on board for that to matter; I can already get whatever I want done with lesser models. It is quite literally just about who is not gonna give me the shittiest experience, and at Anthropic it sure seems they are determined to annoy everyone now that they've started gaining popularity.