A weaker claim is that maybe there's no single silver bullet but there have been many bronze bullets that add up to more than a single silver bullet. I readily endorse at least that and think it's rather easy to justify.
The best outcome is that by the time of disclosure, the patch was already merged, perhaps months prior, so sysadmins following a routine update schedule would already be running a version that includes it and thus have nothing to do. This relies on the assumption that a patch (or series of patches) isn't itself equivalent to a disclosure, so that disclosure can safely lag behind the patch; that assumption is basically untenable in modern times.
Speaking just on timelines (rather than actual underlying innovations or improvements): 802.11 was in 1997, the next revision in 1999, G in 2003, then a 6 year gap to N in 2009, a 4 year gap to AC in 2013, an 8 year gap to wifi 6 in 2021, wifi 7 in 2024 (though apparently buyer beware), and wifi 8 expected (according to the article) in 2028. Doesn't seem too rapid? The 8 year gap is the odd one out.
I think part of it is that if there isn't a regular, practiced process for bumping standards, gaps between revisions can grow quite large and stagnation can set in; any significant improvements then take longer to come to fruition than they would under regular revisions that are only modest most of the time. Looking at a few other things that come to mind: USB had an 8 year gap between 2 and 3 as well; PCIe had a 7 year gap between 3 and 4 (and while there was only a 3 year gap between the specifications for 5 and 6, it still took 3 more years (2025) for the first PCIe 6 devices, and I still can't buy a consumer-level PCIe 6 motherboard, which is a separate mess); C++ had an 8 year gap between C++03 and C++11; Java had a 5 year gap between 6 and 7 (and another 3 years after 7 to get to Java 8). All of these things now have more rapid cycles.
Is that something people want to get rid of? Back when I did some clojurescript people were pretty proud of being able to have it used automatically. What's the plan to get the same benefits? Or is the argument that the benefits aren't significant 15ish years on?
I would say the community is pretty evenly split between people who hate it, and people who find it practical. I don't see many people really championing it or being proud of it these days.
Well, technically I think most of the community is indifferent. But from the discourse about the topic, I feel like I see pretty even splits.
I'm a very happy Google Closure Compiler user, especially with the "advanced optimizations" flag. It does code elimination and variable renaming at a level that no other JavaScript tool even approaches. Excellent software.
I think it gets a bad rap because you need to write your code in a certain way to avoid the optimizations breaking things. But if you're a disciplined developer, you'll reap some large benefits.
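For concreteness, here's a minimal sketch of the discipline that ADVANCED_OPTIMIZATIONS demands (the names and URL here are made up for illustration). The compiler renames dot-accessed properties but never touches quoted string keys, so anything read by code outside the compiled bundle has to use quoted access (or an @export annotation):

```javascript
const config = {};

// Risky under ADVANCED_OPTIMIZATIONS: `endpoint` may be renamed to
// something like `a`, breaking external code that reads config.endpoint.
config.endpoint = "https://example.com/api";

// Safe: quoted string keys are never renamed, so external consumers
// (other scripts, server-side templates, etc.) still see "retries".
config["retries"] = 3;

console.log(config["retries"]); // 3, both before and after compilation
```

The flip side is that mixing dot and quoted access to the same property is how things break: the dot-accessed reference gets renamed while the quoted one doesn't, and they silently stop pointing at the same slot.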
Last I checked, it needed a JVM (parts of the library are in Java). Given there are many JS minifiers and optimizers (tree shaking etc.) available in JS itself in 2026, I don't know why we need this huge overhead.
And for those of us devs, they never really went anywhere. vim was the most popular editor on HN 15 and 10 years ago, still very popular 5 years ago, still popular today... and that's just an editor; all the other tools like top and its descendants never went away. I'll believe "TUIs are back" or in some kind of uprising when I notice my non-developer friends and family using them for anything. The most dominant UI today is the mobile app, and that's not changing. Limited to professional use (i.e. doing work for someone) rather than all use, TUIs aren't touching either web apps or native GUIs either.
And Unreal Engine 5 needs the Agility SDK, creating problems where games wouldn't run if your Windows version wasn't new enough. (Same as the typically encountered glibc problem of the user having an older version than the build needs, really.) (I think most of those particular issues are "solved" now with Win10 being EOL, so the developers just wash their hands of it and say "upgrade". Or use Linux and Steam, where, no thanks to MS or the gamedevs themselves, games old and new can just work.)
Dependency hell comes for everyone, win32 may be stable but the broader ecosystem for Windows is little better than anything else. I say little because at least MS does still commit to a lot of backwards compatibility and ensuring some very old DLLs are still part of new Windows 11 installs.
As another comment notes some older Humble Bundle linux builds just don't work anymore on modern systems; some of those are just because they assumed a particular libjpg or libxml or whatever would be part of the base distro install and be around indefinitely. Bad assumption. But fixable the same way as missing DLLs from Windows builds.
If Dotcl does have good performance, it would be interesting to try running Coalton on top of it too. Coalton syntax is probably not unusual if you are familiar with OCaml and F#: https://github.com/coalton-lang/coalton (Though I'd expect the performance of the typical use case of running on top of SBCL to still be better.)
From the same project there's the recently released mine editor that's trying to be a friendlier gateway into trying Common Lisp (and/or Coalton) than emacs: https://coalton-lang.github.io/mine/ Time-to-first-SHOUTING is still once you start a REPL though -- it tells you that your package (namespace) is CL-USER. I sort of think it's one of those things that grows on you, or at least isn't annoying after a while (until you need to deal with certain foreign function interfaces anyway), and it's an interesting possible convention to use SHOUT-CASE in docstrings to call out specific parameters or other function names instead of some @param, \param, @link, or what have you.
Re that last: FWIW, in Emacs Lisp (which is case-preserving and mostly lowercase by convention, without the legacy symbol case behavior of CL), docstring convention is to use single quotes for most literals and to use all-caps to mean the value of a local symbol—usually a function argument, but sometimes a variable introduced in running text for describing the structure of data or such. Last I checked, CL wasn't as consistent across projects, but I tend to carry the Emacs convention there when not conforming to a different local style, and wonder sometimes who would have their monocle pop off to see it…
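To make that convention concrete, here's a hypothetical Common Lisp function (all names invented) where the upcased words in the docstring refer to the parameters, and another function is named in the same SHOUT-CASE rather than with @link-style markup:

```lisp
(defun scale (value factor)
  "Return VALUE multiplied by FACTOR.
FACTOR should be a real number; compare SCALE-DOWN, which divides instead."
  (* value factor))
```

Since the reader for standard CL code upcases symbols anyway, SCALE-DOWN in the docstring visually matches how the symbol prints, which is part of why the convention carries over from Emacs Lisp so naturally.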
Being paused in the debugger is per-thread. If the server's using a thread-per-request model, and you're stopped in the request, then other requests can proceed just fine. If some of those requests also trigger the debugger, they'll pause and have to wait, they won't interrupt your current debugging view. Extra care should be taken in any sort of production debugging, of course. (At a Java BigCo, production debugging was technically allowed but required multiple signoffs, the engineer wasn't the one in control but had to direct someone else, lots of barriers to prevent looking at arbitrary customer data, and of course still limited to what you can do with a standard JVM restarted in debug mode. (Mainly setting breakpoints and walking stack traces.))
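For reference, the standard way to put a JVM into that debug mode is the JDWP agent flag; the jar name and port below are placeholders:

```shell
# suspend=n means the server starts and serves requests normally until a
# debugger attaches; a breakpoint then pauses only the thread that hits it.
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 \
     -jar app.jar
```
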
But the nicest part is that once you connect to the production application, apart from network lag it's no different than if you were developing and debugging locally on similarly specced hardware to the server, you have all the same tools. Many of the broader activities around "debugging" don't need to happen in a paused thread that was entered with an explicit breakpoint or error, they can happen in a separate thread entirely. You connect, then you can start inspecting (even modifying) any global state, you can define new variables, you can inspect objects, you can define new functions to test hypotheses, redefine existing functions... if you want all requests to pause until you're done, you can make it so. Or if you want to temporarily redirect all requests to some maintenance page, you can make that so instead.

A simple thing I like doing sometimes when developing locally (and I could do it on a production binary too) is to define some (namespaced) global variable and redefine a singly-dispatched method to set it to the self object (possibly conditionally), and once I have it I might redefine the method again to have that bit commented out just so I know it won't change underneath me. Alternatively I can (and sometimes do) instead set this where the object is created. Then I have a nice variable independent of any stack frames that I can inspect, pass to other method calls, change properties of, whatever, at my leisure without really impacting the rest of the program's running operation.

Another neat trick is being able to dynamically add/remove inherited mixin superclasses to some class, and when you do that it automatically impacts all existing objects of that class as well. Mixin classes are characterized by having aspect-oriented methods associated with them; you can define custom :before, :after, or :around methods independent of the primary method that gets called for some object.
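A rough sketch of both tricks, with every name (REQUEST, HANDLE, AUDITED, *CAPTURED*) invented for illustration rather than taken from any real codebase:

```lisp
(defclass request () ((path :initarg :path :reader request-path)))

(defmethod handle ((self request))
  (format t "handling ~a~%" (request-path self)))

;; Trick 1: redefine the method live so it stashes SELF in a global that
;; outlives any stack frame; *CAPTURED* can then be inspected, passed
;; around, or mutated from the REPL at leisure.
(defvar *captured* nil)
(defmethod handle ((self request))
  (setf *captured* self)
  (format t "handling ~a~%" (request-path self)))

;; Trick 2: a mixin whose only job is to carry auxiliary methods.
(defclass audited () ())
(defmethod handle :around ((self audited))
  (format t "entering~%")
  (call-next-method)
  (format t "leaving~%"))

;; Redefining REQUEST to inherit the mixin takes effect for existing
;; instances too; they immediately start hitting the :around method.
(defclass request (audited) ((path :initarg :path :reader request-path)))
```

Removing the mixin later is just redefining the class again without it, which is what makes this attractive for temporary production instrumentation.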
What makes you think it falls flat in a team setting? There are plenty of N-pizza-sized teams successfully using Lisp to this day and you're probably aware of many teams successfully using Lisp in the past, too. There's also the success of Clojure. What's required to have a well functioning team is mostly programming language independent; Lisp itself won't save a team lacking those properties any more than, say, Java would.
Did you even read what I said or who I responded to? I am specifically talking about working inside an image, monkey patching functions and structures live in the running image. That's a practice almost no one uses anymore, and one that, as I said, I use and find convenient as a single dev on a project, but would not want to use in a team; for that, modern workflows with versioning, beaming code, CI/CD, dev containers, etc. are preferred.
I prefer lisp over most other things in life, and so does my team. I was specifically not talking about the language though.
I've frequently said that Java + JRebel gets the closest to the Common Lisp + SLIME experience (closer than Python), but as you say the Lisp experience is still superior; the Java ecosystem has yet to close the gap*. The widest part of that gap I'd mention is not having the condition system built into Java (though I'm aware people have tried to make a comparable one as a library); lacking it degrades the debugging experience considerably (even though simple step-debugging is typically more pleasant than in Lisp). IntelliJ's drop frame feature isn't good enough.

The other problem is needing Java + something. What you get with just a regular JVM running under your IDE is no better than what other languages offer (if they offer anything) as their cute hotswap/hotpatch feature, and it comes with big limitations. (Like no changing method signatures, no adding/removing methods or properties, or only applying changes to new objects.) Once you're doing something non-trivial, especially if you're trying to incrementally develop your program rather than just debug one specific problem, you'll have to restart. In contrast, Common Lisp's got its disassemble, describe, inspect, compile, fmakunbound, ... all being functions callable at runtime, and update-instance-for-redefined-class is part of the standard language too. Support for live reloading of everything is baked into the language rather than a hack on top; SLIME is just a convenient way of working with it. It's still convenient to restart the program occasionally, but few things force you to.
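A small sketch of what that standardized live redefinition looks like; POINT and its slots are hypothetical names, not from any real code:

```lisp
(defclass point () ((x :initarg :x) (y :initarg :y)))
(defvar *p* (make-instance 'point :x 1 :y 2))

;; Hook the migration before redefining the class: ADDED lists the slot
;; names that are new in the redefined class.
(defmethod update-instance-for-redefined-class :after
    ((obj point) added deleted plist &rest initargs)
  (declare (ignore deleted plist initargs))
  (when (member 'z added)
    (setf (slot-value obj 'z) 0)))  ; backfill the new slot with a default

;; Redefine POINT with an extra slot; *P* is migrated in place, lazily,
;; the next time it's touched -- no restart, no object graph rebuild.
(defclass point () ((x :initarg :x) (y :initarg :y) (z :initarg :z)))
(slot-value *p* 'z)  ; should now be 0
```

This is exactly the kind of thing the JVM's built-in hotswap refuses (changing a class's fields), and why getting parity requires JRebel or a modified runtime.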
Unfortunately JRebel has killed their free tier, so I'd now point unwilling-to-pay programmers to something like https://github.com/JetBrains/JetBrainsRuntime which is IntelliJ/Eclipse/whatever-independent. I haven't tried it myself yet though... Given they only address the biggest class reloading concerns, I doubt it's actually comparable to JRebel for business-world Java. JRebel handles among other things dynamic reloading from XML changes and reinitializing autowired Spring beans that other classes use for dependencies.
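A minimal sketch of using that alternative, assuming a JetBrainsRuntime install (the path and jar name are placeholders); the enhanced-redefinition flag is what unlocks DCEVM-style hotswap beyond the stock JVM's method-body-only limit:

```shell
# Run on JBR with enhanced class redefinition, plus the debug agent so an
# IDE can attach and push reloaded classes into the running process.
/path/to/jbr/bin/java -XX:+AllowEnhancedClassRedefinition \
    -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 \
    -jar app.jar
```
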
*Caveat, I've been out of the professional Java grind for a while, I'd be pleasantly surprised if some new version that's come out contradicts me.