Hacker News | fizlebit's comments

The big bet here is that Anthropic stays ahead of the curve and is the go-to tool for businesses. The risk is disruption to that leader position, or that it can't sell all those tokens because the value/cost ratio is too low for consumers of them. I suspect that disruption is the bigger risk, because in my experience the tokens are valuable; so as they innovate, we're betting the value of the tokens goes up and the cost comes down.


I'd like write program / run program / debug program to be as easy as it is in Roblox. It isn't that easy, though; the set of things you need to do it well is extensive. I wouldn't be averse to a new platform, one in which all IO is over highly performant queues, but the morass of existing software tied to Unix is large; just look at compilers and all the child processes they launch. It was always shims and it will always be shims.


Scrolling through those images, it just feels like intellectual theft on a massive scale. The only place I think you're going to get genuinely new ideas is from humans. Whether those humans use AI or not I don't care, but I don't find the repetitive slop of AI copying the creative output of humans interesting. Call me a curmudgeon. I guess humans also create a lot of derivative slop even without AI assistance. If this somehow leads to nicer-looking user interfaces and architecture, maybe that is a good thing. There are a lot of ugly websites, buildings and products.


I think if your university doesn't do in person exams with pen and paper then the degrees it hands out are not much evidence of anything.

If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.

I very fondly recall many of the courses I took at university. The exams were a helpful motivating factor, even for the interesting courses.


I do feel like better application sandboxing is needed, but so much open source software is built on the Unix abstraction that you have to run in a container, and macOS doesn’t have containers as far as I can see. Containers themselves are a bit of a poor abstraction, though maybe the best we can do with Unix at the core. I think something closer to Roblox Studio would be cool: when you open an environment, stuff just spins up in the background, but there is a good debugger, logging, a developer IDE, good rendering (e.g. 3D graphics), separate projects are separate, and when you spin down a game (read: app or project) everything spins down.


Apple did actually introduce its own container framework in Tahoe, but it’s still early days. https://github.com/apple/container


These are Linux containers in a VM, I’m pretty sure GP is talking about native macOS containers.

Which: They do actually have some container-like sandboxing tech around applications (“iTerm wants to access your downloads folder”).


Yes, AFAIK macOS apps could theoretically be sandboxed as well as (or close to how) iOS apps are. You can find the policies for many first-party apps and daemons in /System/Library/Sandbox/Profiles. But in practice most third-party apps aren't.

https://bdash.net.nz/posts/tcc-and-the-platform-sandbox-poli... and https://bdash.net.nz/posts/sandboxing-on-macos/ are good introductory articles.


It's a good idea so it can't take over your dev machine.

But not sufficient, since it'll still F over whatever code you are working on, resulting in a backdoored app getting deployed plus infected dev scripts etc., bringing interesting times to your teammates and downstream open source project users, your API keys and cloud credentials getting compromised, etc.


I don't think it's viable to containerize an IDE. Running user code at full permissions is a core feature of an IDE, and the programs the user develops in it could potentially touch any OS surface. When the user is a developer, you have to trust them.

Though this autorun feature is crazy and should be completely off by default.


apple has pretty good containers actually. why do you say they are a poor abstraction?


That's what stuff like XPC and entitlements are for, which programs from a UNIX culture background naturally don't care to use.


UTM is free and spins up native macOS VMs. If I absolutely have to write JavaScript, that’s where I do it, ever since Shai-Hulud.


Looks a bit like Rust. My peeve with Rust is that it makes error handling too much donkey work. In a large class of programs you just care that something failed, and you want a good description of that thing:

  context("Loading configuration from {file}")
Then you get a useful error message by unfolding all the errors at the point in the program where it makes sense to talk to a human, e.g. logs, an RPC error, etc.:

  Failed: Loading configuration from .config because: couldn't open file .config because: file .config does not exist.

It shouldn't be harder than a context command in functions. But somehow Rust conspires to require all this error type conversion and question marks. It is all just a big uncomfortable donkey game, especially when nested closures are forced to return errors of a specific type.
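For what it's worth, the "unfold the chain at one human-facing point" idea can be sketched with just the standard library's `Error::source()` chain. This is a minimal illustration of the proposal, not how any particular crate implements it; the `Ctx` type and `unfold` function are made up for this example:

```rust
use std::error::Error;
use std::fmt;

// Hypothetical wrapper: attaches a context string to any error and keeps
// the original reachable via source(), so the chain can be unfolded later.
#[derive(Debug)]
struct Ctx {
    what: String,
    source: Box<dyn Error>,
}

impl fmt::Display for Ctx {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.what)
    }
}

impl Error for Ctx {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(self.source.as_ref())
    }
}

// Walk the source() chain once, at the point where it makes sense to talk
// to a human (logs, RPC error, etc.), and build one readable line.
fn unfold(err: &dyn Error) -> String {
    let mut msg = format!("Failed: {}", err);
    let mut cur = err.source();
    while let Some(e) = cur {
        msg.push_str(&format!(" because: {}", e));
        cur = e.source();
    }
    msg
}

fn main() {
    let root = std::io::Error::new(
        std::io::ErrorKind::NotFound,
        "file .config does not exist",
    );
    let err = Ctx {
        what: "Loading configuration from .config".into(),
        source: Box::new(Ctx {
            what: "couldn't open file .config".into(),
            source: Box::new(root),
        }),
    };
    println!("{}", unfold(&err));
    // Failed: Loading configuration from .config because: couldn't open
    // file .config because: file .config does not exist
}
```

The donkey work the comment complains about is exactly the `impl Display` / `impl Error` boilerplate above, repeated per error type; a `context(...)` primitive would generate it for you.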


You just described how the popular "anyhow" and "snafu" crates implement error handling


Even with anyhow there is a lot of boilerplate, it seems to me, when dealing with crates that don’t use it. I haven’t tried snafu, but its name does not inspire confidence.

Clankers (AI assistants) also love to unwrap, and if you don’t catch them you have an abort waiting for you.


I like your "context" proposal, because it adds information about developer intention to error diagnostics, whereas showing e.g. a call stack would just provide information about the "what?", not the "why?" to the end user facing an error at runtime.

(You should try to get something like that into various language specs; I'd love for you to succeed with it.)

EDIT: typo fixed.


I think that vibe coding now, with Anthropic tools and the latest model, means the cost of writing integration tests is significantly reduced. When the company ships a large product that has components from many teams, there is still a role for QA engineers who run nightly tests and chase teams to help diagnose an issue when one is found. If you don't have such a central team publishing golden versions, then everybody is chasing the same bug. Ideally the integration tests are part of the change-acceptance flow, but low-frequency bugs (occurring maybe 1 in 100 test runs) can still sneak through.


yeah but machines don't produce horseshit, or do they? (said in the style of Vsauce)


It looks from the public write-up like the thing programming the DNS servers didn't acquire a lease on the server to prevent concurrent access to the same record set. I'd love to see the internal details on that COE.

I think an extended outage exposes the shortcuts. If you have 100 systems, one or two can't start fast from zero, and they're all required to get back to running smoothly, then you're going to have a longer outage. How would you deal with that? You'd uniformly subject your teams to start-from-zero testing. I suspect, though, that many teams are staring down a scaling bottleneck, or at least were for much of Amazon's life, and so scaling issues (how do we handle 10x usage growth in the next year and a half, and which are the soft spots that will break) trump cold-start testing. Then you get a cold-start event, with the last one being 5 years ago, and 1 or 2 of your 100 teams fall over, and it takes multiple hours all hands on deck to get them started.
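The lease idea above can be sketched in a few lines: a writer must hold the current lease token, and writes carrying a stale token are rejected instead of clobbering a newer record set. This is purely illustrative; the names (`RecordStore`, `acquire`, etc.) are hypothetical and say nothing about how AWS's internal systems actually work:

```rust
use std::collections::HashMap;

// Toy in-memory record store guarded by a single-holder lease.
// Real systems would add lease expiry and durable storage.
struct RecordStore {
    lease_holder: Option<u64>, // token of the current lease holder, if any
    next_token: u64,
    records: HashMap<String, String>,
}

impl RecordStore {
    fn new() -> Self {
        RecordStore { lease_holder: None, next_token: 1, records: HashMap::new() }
    }

    // Acquire the lease; fails if another writer is mid-update.
    fn acquire(&mut self) -> Option<u64> {
        if self.lease_holder.is_some() {
            return None;
        }
        let t = self.next_token;
        self.next_token += 1;
        self.lease_holder = Some(t);
        Some(t)
    }

    // Writes are only accepted from the current lease holder, so a
    // stale or concurrent writer cannot silently overwrite the record set.
    fn write(&mut self, token: u64, name: &str, value: &str) -> bool {
        if self.lease_holder != Some(token) {
            return false;
        }
        self.records.insert(name.to_string(), value.to_string());
        true
    }

    fn release(&mut self, token: u64) {
        if self.lease_holder == Some(token) {
            self.lease_holder = None;
        }
    }
}

fn main() {
    let mut store = RecordStore::new();
    let t = store.acquire().expect("lease was free");
    assert!(store.acquire().is_none()); // second writer is locked out
    assert!(store.write(t, "example.com", "10.0.0.1"));
    assert!(!store.write(t + 1, "example.com", "10.0.0.2")); // stale token rejected
    store.release(t);
}
```

The missing piece in the outage, as I read the write-up, is exactly the `write`-side token check: without it, two planners racing on the same record set both succeed and the last one wins.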


> I'd love to see the internal details on that COE.

You'd be unpleasantly surprised, on that point the COE points to the public write-up for details.


I actually prefer a game where the rules mostly come from the DM. I think it is better if there is no players' handbook. The characters develop along their story arc, and at some point your character acquires new powers: e.g. your character has been spending a lot of time developing new combat moves, so they kind of level up and now the DM explains a new mechanic. Your character has become adept at disarming opponents and now gets such-and-such a bonus to attempt a disarm.

This is a lot to place on the DM, but I like the anarchy of a system like Dungeon Crawl Classics. You expect some of your characters to die: in one adventure my character, in a last-ditch effort to save himself, drank a potion of unknown origin, and that potion turned him into a mithral statue. It was a fitting end to his short but eventful life.

Another character, played by a different player, managed through a long process involving books and negotiations with his patron to construct a demonic sentient flying dog through whom he could cast spells and see.

This kind of exploration, I think, encourages players to see their characters much more as characters than as machines to be min-maxed, and it is way more fun.

Give the DM total control to decide the dice rolls that determine the outcome of the shenanigans. If you try to hire an army of peasants, you're going to be dealing with appointing sergeants, logistics, mutiny, and desertion, all before you try to line them up to throw a ladder at some dude, which in the end is probably like a 1d20 >= AC for a chance of 1d4 damage, with of course crit tables, where on a critical success the dude might be tangled up in the ladder and fall over or something.

