What? You can literally just download an exe from any website and run it.
If you're complaining that Valve owns a big list of games and a ton of eyeballs, and not being on that list means those eyeballs don't see you when they look at that list, idk what to tell you because they seem to have earned that part of their business pretty fairly.
"many engineers have not yet tried things that became feasible over the last months"
I have heard this statement every single day for 2 years and yet we still have no companies compressing 10 years into 1 year thus exploding past all the incumbents who don't "get it".
which is a pretty large caveat. Anecdotally, I've found my side projects (which are solo greenfield projects, and don't need to be supported to the same standards as enterprise software) have gained the boost the GP was talking about.
At work, it's different, since design, review, and maintenance are much more onerous.
If you want an example of a project that condensed 5 years into 6 months and exploded past the competition I suggest looking at OpenClaw.
The first line of code was written on November 25th. It achieved adoption in the "personal agents" space that far exceeded the other companies that had tried the same thing.
(Whether or not you trust the quality of the software you can't deny the impact it had in such a short time. It defined a new category of software.)
OpenClaw is definitely not a "5 years" project pre-AI though. That was more like a month of greenfield work compressed into a weekend -- which is still really impressive, don't get me wrong! -- but I think the point is we're not seeing mature, legacy codebases get outcompeted by new, agile, AI-driven codebases; we're seeing greenfield projects get spun up faster. Which, again, is still impressive and valuable.
If agents could really compress 10 years of development into 1 year, you'd see people making e.g. HFT platforms and becoming obscenely rich, not making a fun open-source project and getting hired by OpenAI as an employee.
Didn't we learn anything from the past? Using LOC, number of commits, or GitHub stars to measure success or productivity is so backwards. It seems everyone on the AI bandwagon is either young (and so doesn't know our history) or has simply forgotten all the good practices in software engineering.
That latter bit is my experience. As soon as AI enters the equation, we have to immediately ignore everything we ever learned and just type text into the prompt box, or we're not doing it right.
No it isn't. There's basically no upper bound on the number of commits an LLM can generate. If the LLM takes 10,000 commits to do what a human would do in 10, then the comparison is meaningless.
I don't know anything about the code quality of OpenClaw, but telling me the number of commits tells me precisely nothing of use.
OK, now do that for 369,293 stars, 76,193 forks, 138 releases and 2,133 contributors.
I expect there is no number I could bring up here that won't be instantly shot down as telling "precisely nothing". My mistake for bringing up any numbers at all.
OpenClaw is a good example of a completely new project written using coding agents that made a significant impression on the world and would not have been built without them.
I'm surprised this is a hill I have to die on, but there we are.
(I'm not even a user of OpenClaw! I don't think it's secure or safe enough to use in my own life.)
> OpenClaw is a good example of a completely new project written using coding agents that made a significant impression on the world and would not have been built without them.
Nobody is denying that OpenClaw is popular, and nobody (in this thread, at least) is denying that AI rapidly speeds up the ability to make an initial release or prototype for greenfield projects. But the comment that spawned this discussion was:
> we still have no companies compressing 10 years into 1 year thus exploding past all the incumbents who don't "get it".
The issue is that you're extrapolating OpenClaw, which upon release was a month of pre-AI development work compressed into a few days, to cover the "10 years into 1 year" scenario. However, this isn't appropriate because software development is non-linear. As anyone who has worked on a greenfield project pre-AI should know, those first weeks and months have much faster development cycles. There's no tech debt to worry about; there's no urgent bug tickets or feature requests from customers; there's no thinking about whether it's okay to ship a breaking change.
> OK, now do that for 369,293 stars, 76,193 forks, 138 releases and 2,133 contributors.
You're counting forks and stars as code metrics now? Oy.
Look, those aren't nothing -- they're a decent enough proxy for popularity -- but they aren't a rebuttal to the original comment. (The other day some LLM dudebro got a bajillion stars on GH for his vibe-coded hot mess of a repo that sets three environment variables. I should go check the number of commits on that...)
> OpenClaw is a good example of a completely new project written using coding agents that made a significant impression on the world and would not have been built without them. I'm surprised this is a hill I have to die on, but there we are.
The fundamental problem here is that you were asked to provide an example of some software where LLMs have made a revolutionary difference, and OpenClaw is what you chose. That just says a lot, right there.
I don't even really care about that debate, since OpenClaw probably meets the literal requirements of the original question (if not the spirit), and sure, it's had a big splash. But the point of the OP is well-taken: everyone is so "productive", but if the only thing we're seeing from it is Moltbook and 10,001 half-broken pokemon games, then eventually the bloom is going to fall off the rose.
The fact that you felt you had to rebut the "I could do that in a weekend" guy with commit counts is both poetic and oddly fitting for where we are with these things.
I stand by what I said. OpenClaw proved that "personal digital agents" are a category with a huge amount of demand, to the point that people will jump through major hoops and completely ignore the colossal security risks involved in adopting that software.
It's spawned dozens of imitations, some of which are looking quite credible.
Anthropic themselves have been cloning OpenClaw features.
I get that it's not cool to say "OpenClaw is significant and influential" but I truly believe that it is.
> 41,964 commits is a lot more than "a month of greenfield work".
I meant a month for the initial release, not current state.
Regardless, much like lines of code, number of commits is not a good metric, not even as a proxy, for how much "work" was actually done. Quickly browsing, there are plenty[0] of[1] really[2] small[3] commits[4]. Agentic coding naturally optimizes for small commits because that's what the process is meant to do, but it doesn't mean that more work is being done, or that the work is effective. If anything, looking at the changelog[5], OpenClaw feels like a directionless dumpster fire right now. I would expect a lot more from a project if it had multiple people working on it for 5 years, pre-AI.
Ideally, the given example would be something not adjacent to the presently white-hot category of "AI agents".
Like, look at e.g. YC minus the AI and AI-adjacent companies. Are those startups meaningfully more impressive or feature-rich compared to a couple years ago?
Not yet, no. I think that's because coding agents got good in November, most people didn't notice until January and it still takes 3-4 months to go from idea to releasing something.
I expect we will start seeing the impact of the new coding agent enhanced development processes over the next few months.
It's trash vibe-coded markdown files around pi. This exemplifies well what the OP is saying. We are at the ICO stage of LLMs. Hopefully there won't be an NFT one.
As much as I love to hate on AI: even the bad apples still produce something that one can reasonably work with.
Cryptocurrencies? Barely any other use than money laundering, buying drugs and betting on the outcome of battles in war. And NFTs? No use at all other than money laundering and setting money ablaze.
Crypto is a few hundred billion dollars smaller as an industry than GenAI is. I guarantee you that AI is a far better money laundering scheme. Maybe the two better money laundering schemes would be the construction business and the global warming business. Doesn't mean that some of the stuff produced is good.
The condensation argument is totally true. Strikes me, though, that the other metric I'd look at is how long code survives before being rewritten. Feels like for that one it's a bit early to tell...
Honestly, the most impactful thing I've seen AI do for any workplace is serve as the ultimate excuse for whatever pet thing someone's wanted to do, that can't stand on its own merits, and what they really need is a solid excuse.
Rewrite that old crunchy system that has had 0 incidents in the last year and is also largely "done" (not a lot of new requirements coming in, pretty settled code/architecture)? It's actually one of our most stable systems. But someone who doesn't even write code here thinks the code is yucky! But that doesn't convince the engineers who are on-call for it to replace it for almost no reason. Well guess what. We can do it now, _because AI!!!_ (cue exactly what you think happens next happening next)
Need to lay off 10% of staff because you think the workers are getting too good of a deal? AI.
Need to convince your workers to go faster, but EMs tell you you can't just crack the whip? AI mandates / token spend mandates!
Didn't like code reviews and people nitpicking your designs? Sorry, code reviews are canceled, because of AI.
Don't like meetings or working in a team? Well now everyone is a team of 1, because of AI. Better set up some "teams" full of teams of 1, call them "AI-first" teams, and wait what do you mean they're on vacation and the service is down?
Etc. And they don't even care that these things result in the exact negative outcomes that are why you didn't do them before you had the excuse. You're happy that YOUR thing finally got done despite all the whiners and detractors. And of course, it turns out that businesses can withstand an absurd amount of dysfunction without really feeling it. So it just happens. Maybe some people leave. You hire people who just left their last place for doing the thing you just did and now maybe they spend a bit of time here. And the game of musical chairs, petty monarchies, and degenerate capitalism continues a bit longer.
Big props to the people who managed to invent and sell an excuse machine though. Turns out that's what everyone actually wanted.
I've written my own DNS server and my own "I can't believe it's not docker compose and kind of ansible", among other more and less esoteric but consistently dubiously motivated enterprises.
They all inevitably ended up with me using them for a good 1-2 years, while also gaining an appreciation for whatever I thought was over-complicated about the OSS version (not ansible tho. But maybe pyinfra.). And I learned soooo much. I can't tell you how helpful it's been both here and there while designing architecture for actual work ("this is basically just DNS") and especially in interviews, both in just having something to talk about, and in having a story arc showing lessons learned. Plus it's nerdy, which is always helpful branding as long as you don't come off too weird or boastful along the way.
"It’s easy for long-term strategic, high-impact work to sink to the bottom of everyone’s todo list."
"[...] But one where the tasks to accomplish the project are not anyone’s full-time job."
Sounds like the organization's leadership is incapable of balancing short-term and long-term goals, and it's falling to people who are paid less to "step up" and try to swim against the current for the good of the company.
or
Whatever the author is talking about is some engineering pipe dream disconnected from actual business value, and someone is dragging a bunch of other people semi-willingly along trying to execute on it without a mandate/funding from leadership.
Impossible to say which from the outside. But I've known several instances of both cases.
That’s basically the face of GMOs, so it is an issue for GMOs. GMOs for whatever reason have a terrible ambassador and I haven’t seen evidence to the contrary.
For vaccines, a good portion of the population remember vaccines being developed and marketed to help people. Then there are immigrants that remember more recently how life changing vaccines are.
On the contrary, I find "The older I get, the more I appreciate dynamic languages. Fuck, I said it. Fight me." is exactly my sentiment too, with a caveat. I really like gradual typing, like python has. And not like ruby has (where it's either RBS files and it's tucked away, or it's sorbet and it's weird).
The worst code base I had to work in by far was a Python code base. Extremely difficult to refactor. Many bugs that were completely avoidable with static typing. I think maybe more modern Python is a little bit better but wouldn't be my choice for large projects. It's not just about correctness. It's also about performance. That code was so slow and that impacted our business.
Meanwhile the worst codebase I've had to work in by far is golang where someone clearly took the language's limitations as a challenge and not as an intentional constraint on writing clever code. And it's an impressive feat because I too have seen horrifying clusterfucks of python codebases with no typing whatsoever and very sloppy hygiene.
My take on static vs dynamic is that a sufficiently motivated programmer can make a mess out of anything they're given, and that types actually really don't help that much. Furthermore, "the types work out!" is also not actually an incredibly comforting fact to me. There are so many more places things can be wrong. And I also find that the types of errors static typing prevents tend to not be the most meaningful errors to prevent or the hardest to catch in subsequent testing, ESPECIALLY with gradual typing!
With python in particular, gradual typing with a checker gets you 99% of the benefits of static typing, with the HUGE added benefit of you just being able to tell the type checker to stfu when it's not adding value. ORMs and data parsing are so much easier in dynamic languages, for instance. And I find the most ergonomic ORMs and data parsers in static languages tend to be the ones that have gone to extraordinary lengths to make them feel like the stuff you just get much more cheaply in dynamic languages. I have recently been writing python with basedpyright and very intentional type hinting and it has been my favorite experience in a long time. More impactful to my productivity (real productivity - actually producing things that work and are real) than AI.
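To make the gradual-typing point concrete, here's a minimal sketch (the function names and data are illustrative, not from any real codebase): one fully annotated function that a PEP 484 checker like basedpyright can verify at every call site, next to a deliberately loose one that uses `Any` as an escape hatch where strict typing would fight the data layer.

```python
from typing import Any

def total_ms(durations: list[float]) -> float:
    """Fully typed: a checker flags any call site passing the wrong shape."""
    return sum(durations) * 1000

def load_row(raw: dict[str, Any]) -> dict[str, Any]:
    # Gradual escape hatch: values stay Any, mirroring what ORM rows or
    # parsed JSON look like before you bother pinning down a schema.
    return {k: v for k, v in raw.items() if v is not None}

print(total_ms([1.5, 2.5]))            # 4000.0
print(load_row({"a": 1, "b": None}))   # {'a': 1}
```

The same file runs identically with or without a checker; the annotations only constrain the parts you've chosen to harden, which is the "tell the type checker to stfu when it's not adding value" workflow described above.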