My agents often write themselves scripts. Isn't that effectively what you're asking for? Prompting for scripts can also be a useful tactic for both time and accuracy when you know the task is a good fit for it.
The problem is that the code it spits out on the fly is untested and untrustworthy. Identify the parts of your workflow that could be accomplished with regular code - write and unit test that code, with LLM help if you want, and use the LLM as the orchestrator only.
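To make that concrete, here's a minimal Python sketch of the pattern (all names here are hypothetical, not from any specific framework): the deterministic logic lives in ordinary, unit-tested functions, and the model's only job is to pick which one to call.

```python
# Minimal sketch of "tested code, LLM as orchestrator only".
# Names are hypothetical; the point is the division of responsibility.

def normalize_email(raw: str) -> str:
    """Deterministic building block: written once, unit tested, reused."""
    return raw.strip().lower()

# The only thing the LLM decides is *which* tool to call with what input;
# the tools themselves are ordinary, trustworthy code.
TOOLS = {"normalize_email": normalize_email}

def dispatch(tool_name: str, arg: str) -> str:
    # In a real system, tool_name would come from the model's output.
    return TOOLS[tool_name](arg)

# Unit test for the building block, independent of any model output.
assert normalize_email("  Foo@Bar.COM ") == "foo@bar.com"
```

The tests run in CI like any other code, so the orchestrating model can't silently regress the pieces that actually do the work.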
Yeah, the problem is that I do not think the agents are good at reusing scripts and stitching them together. At least for me they recreate too much similar code. I hope we will see platforms like windmill.dev find the optimal solution for this. I have not been able to test it enough. But having a platform that gives you some observability out of the box and protects secrets from the LLM is nice.
I'm not sure how much overlap there is between this project and Tectonic [1], but that's what I use personally for local PDF generation, and it's also using a good bit of Rust.
Just thought I'd mention since it's related and I really like the project.
In some ways software is really fundamentally different from things like baking or plumbing. Many bakers love the craft, but nobody expects free baked goods (except maybe their family). Many plumbers are true craftsmen who take pride in solving people's problems, but we don't expect free plumbing. On the other hand, once you write the code, the logic is complete; its closeness to an equation makes it feel like selling algebra homework.
More importantly though, baked goods get eaten, and pipes aren't assumed to suddenly become load bearing. I think a lot of developers hesitate to sell software they aren't prepared to support professionally. Toy projects then sometimes gain a community and grow organically. It's at this stage I feel we need a better path to funding without a lot of the capture that can occur.
It would be cool if we could "farmers marketize" software though. Come together to taste some exotic and local varieties. Maybe meet the local shops, pay for some overpriced TUI gizmo or a hash function with a weird pattern.
Sorry went into fantasy land there. This is obviously not the solution to the broader OSS funding issue, but it's a cute dream where maybe some people make a buck.
I think the bigger solution would be more opportunities for people outside of academia to get small grants to work on their projects. More foundations supporting the core technology and development that the tech world depends on now, and prospectively in the future.
Expecting everything for free is a 2000s thing, from after the FOSS movement really took off.
Until then the options were: we pay for software, we pirate it, or we turn to 30-day demos, freeware, shareware, or public domain releases, which might or might not come with source code.
Ironically, what many are doing nowadays is a return to those days, with source-available licenses or open core.
> In some ways software is really fundamentally different from things like baking or plumbing
You were onto something with this but then got sidetracked.
The fundamental difference is that software (a digital product) cannot be given away and cannot be consumed; it can only be copied. For anyone else to use any non-digital product, a bread loaf, a pipe, you have to give it away: you must no longer own the bread so that the other person can own and use it. Not so with software, since you never give software away; you hand over a copy that costs nothing to make. Both you, the creator, and the user now have a copy of the same thing and can use it indefinitely (this is the second difference: it is not consumed).
This is the fundamental difference that "disrupts" the classic capitalist economic flow. The proof of this disruption can be found in the continuously changing pricing strategies of digital products and software, as companies try to fit a fundamentally different kind of product into classical economic transactions.
The solution is a communist economy, where money won't be a transaction wall for product exchange and one's well being (as opposed to having to make money to live by)
I don't think it is this novel anymore, though. With easier and cheaper distribution, the same thing is starting to apply to other creative endeavours like music, art, or writing, where the costs of publishing are getting negligible, while there is an ever-growing base of creatives who want to offer their work to an audience.
And maybe we should treat Open Source more like other art forms, where you get patronage, or get small payments per download like in the shareware model.
While I find speculating about different models besides capitalism a good exercise, I also don't think these things are wholly incompatible with our current societal structure.
I am not sure what you mean by "this novel". It is a fundamental difference between all digital products (software, music, etc) and all other physical (non-digital rather) products. It doesn't have to be a novel difference to be important. This distinction in itself is enough to produce the effects we see on the economy. I mentioned the ever changing pricing strategy of such products as an example.
> I also don't think these things are wholly incompatible with our current societal structure.

Wholly incompatible? By no means, this industry is making tons of money. Surely it is compatible.
I think of it more like a handbrake. The current capitalist societal structure puts a hard limit on our potential as a society to fully leverage these technologies to better our lives. Open source is just a glimpse of what can be accomplished when money doesn't get in the way of our work exchange. And imagine what humanity could have achieved if 1000x as many people did open source without having money issues.
Security-minded generalists exist. They might move slower than you'd expect of a MFBS (move fast, break shit) engineer, but you might also end up with fewer issues later.
there’s always some senior-ish person in the interview pool who is interested in security. hire them, let them figure things out and then give them permission to call bullshit on what you’ve done so far.
avoid hiring the “fanatics” tho. you don’t need E2EE everywhere.
The analogy you are replying to is pointing out that it works just fine for production, so if you should use it or not is simply a matter of taste.
More to the point, there is no objectively right answer of what stack you should use. There are plenty of objectively wrong answers, but compose isn't one of them.
Even with the wrong answers, it comes down to: who's dealing with the mess? Who's paying for the site's uptime? Who isn't getting paid while the site isn't up?
I think it's partially accurate, and partially a consequence of how async fractures the design space, so it will always feel like a somewhat separate thing, or at least until we figure out how to make APIs agnostic to async-ness.
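A toy Python sketch of what that fracturing looks like in practice (the function names are made up for illustration): the pure logic can be shared, but every wrapper that touches I/O has to exist in two "colors", one sync and one async, with nearly identical bodies.

```python
# Sketch of the "function coloring" problem: the same operation must be
# written twice, once for synchronous callers and once for async callers.
import asyncio

def parse(data: str) -> list[str]:
    # Pure logic: callable from either world without duplication.
    return data.split(",")

def read_sync() -> list[str]:
    data = "a,b,c"            # stand-in for a blocking read
    return parse(data)

async def read_async() -> list[str]:
    await asyncio.sleep(0)    # stand-in for a non-blocking read
    return parse("a,b,c")

print(read_sync())
print(asyncio.run(read_async()))
```

Both calls produce the same result, but neither wrapper can call the other without blocking an event loop or dragging a runtime into sync code, which is why async tends to feel like a separate dialect of the language.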
I am a beginner to Rust but I've coded with gevent in Python for many years and later moved to Go. Goroutines and gevent greenlets work seamlessly with synchronous code, with no headache. I know there've been tons of blog posts and such saying they're actually far inferior and riskier but I've really never had any issues with them. I am not sure why more languages don't go with a green thread-like approach.
Because they have their own drawbacks. To make them really useful, you need a resizable stack. Something that's a no-go for a runtime-less language like Rust.
You may also need to setup a large stack frame for each C FFI call.
Rust originally came with a green thread library as part of its primary concurrency story but it was removed pre-1.0 because it imposed unacceptable constraints on code that didn’t use it (it’s very much not a zero cost abstraction).
As an Elixir + Erlang developer I agree it’s a great programming model for many applications, it just wasn’t right for the Rust stdlib.
One of Rust's central design goals is to allow zero cost abstractions. Unifying the async model by basically treating all code as being possibly async would make that very challenging, if not impossible. Could be an interesting idea, but not currently tenable.
One problem I have with systems like gevent is that it can make it much harder to look at some code and figure out what execution model it's going to run with. Early Rust actually did have a N:M threading model as part of its runtime, but it was dropped.
I think one thing Rust could do to make async feel less like an MVP is to ship a default executor, much like it has a default allocator.
They could still stop a step short of a default executor and establish some standard traits/types that are common across executors.
By providing a default, I think you're going to paint yourself into a corner. Maybe have one or two opt-in executors in the box... one that is higher-resource like tokio and one meant for lower-resource environments (like embedded).
I hate them, but not for perhaps the normal reason. I also hate the way delivery has progressed. Delivery used to be relegated to a few types of food which were designed better for it. Remember when your pizza came hot in an insulated pizza sleeve, not dropped at your entrance cold and sad?
We're making it more and more normal to completely avoid interacting with each other, instead of treating service as something we cherish. I think this is hugely detrimental to society.
I don't know what harm you are imagining happens here. Because the actual environmental harm of normal customer pick up is pretty large - it's very inefficient to have hundreds of people converge on one location, rather than have a single person (or robot) deliver many orders to those same hundreds of people.
These robots can take only one order at a time (at least here in Prague), since they don't have multiple separated compartments, and it takes ages to deliver one order (the company claims 14 minutes in a 1-2km zone, but I have my doubts about their data; I assume at best they mean the time from pickup at the restaurant to the customer's building). So it's much more efficient for people to walk that 1-2km, or take public transport, to pick up their food.
I remember they proudly announced that their THREE robots successfully delivered 130 orders over the 4-month period since the trial started in December 2025. Even over one month that would be less than two orders per day per robot; over 4 months the number is a bad joke, not even one order per robot per day.
Though now they are planning sweeping robots, which seems like a much better use of robots: doing something useful and beneficial to everyone, not just to a bunch of lazy hipsters.
> Because the actual environmental harm of normal customer pick up is pretty large - its very inefficient to have hundreds of people converge on one location, rather then a single person or (robot) deliver many orders to the same hundreds of people.
A few months ago, this was the reasoning that tipped me into looking into grocery delivery options.
Unfortunately, I'm not really interested in services using instacart/doordash/etc., but they've been driving the in-house delivery services out of the market. Of the ~5 grocery services here, 2 were always instacart/doordash, the formerly 2 best options abandoned their in-house services within the past year, and the remaining option is expensive enough that I'm not really motivated away from just driving over to the closest store myself. (The store with delivery is notably further away.)
I guess maybe that's just the cost of delivery outside of the gig model, and the inefficiencies of everyone driving to the store are externalized away...
This isn't true. The grocery distribution model is incredibly efficient in terms of transportation cost even if you drive there.
A delivery usually only transports a limited number of items and has to come from a far away hub. With grocery stores, the hub is less than 2km away and you usually buy more than half a shopping cart of goods.
Not the original commenter, but I share the same sentiment.
The harm done is that there is less human interaction outside one's bubble. Before deliver-anything-at-home you were forced to be in environments where you'd interact with people you'd normally not see.
At some level you're also building a connection if you're interacting with the same service employee or bump into a neighbor.
Strong communities are a massive boost to public health. They reduce loneliness and prevent all kinds of diseases (heart stuff, dementia, etc). They also build a safety net so people don't have to rely so much on official healthcare.
Now obviously just some food delivery is not destroying community by itself. There's probably worse offenders out there. However, society becomes more and more parallel and robots do contribute to it. Unfortunately it's not something that's discussed a lot.