Crypto sucks energy and creates no value. It's complete and utter speculative garbage that also destroys the planet.
AI has real value. We can argue about whether the cost is worth the value, whether we're on an exponential improvement curve or not, whether it ends up creating jobs or destroying jobs, but AI is mind-blowing science fiction that nobody would have believed would exist 10 years ago.
> Crypto sucks energy and creates no value. It's complete and utter speculative garbage that also destroys the planet.
All of what you said is false.
Stablecoins are not speculative and have real value: you can send money worldwide, at low cost, to wallets the same day, right now, with far less energy than today's "AI".
> AI has real value.
What do you mean by "AI" specifically? LLMs in data centers?
The value of this mysterious "AI" or even "AGI" paradise is not even for you. It is actually used against you.
> We can argue about whether the cost is worth the value, whether we're on an exponential improvement curve or not
You understand that the current iteration of "AI" needs tens of gigawatts of power, hundreds of billions of dollars, and wasteful amounts of water, and that it is causing electricity prices in certain cities to skyrocket?
The way it is financed appears close to fraudulent, with vague "commitments" and mountains of debt; it would take almost a trillion dollars in revenue to pay off the data centre build-out.
> whether it ends up creating jobs or destroying jobs, but AI is mind-blowing science fiction that nobody would have believed would exist 10 years ago.
Assuming the data centers do get built (if they ever are), can you name the new jobs that will be created by "AI"?
How much market cap is in stablecoins vs. proof-of-work crypto? Has Bitcoin gone down and disappeared due to the availability of stablecoins?
Bitcoin uses about 200 TWh per year, probably pretty similar to all AI usage today, give or take. Certainly if we look at the area under the curve, Bitcoin has still used far more energy (~1000 TWh) than AI/LLMs, for what is essentially a scam/pyramid scheme. And this is just Bitcoin. But yes, LLMs are using more and more energy (though potentially with a larger share of renewable sources).
I mean AI in the colloquial sense: large language models. It's ridiculous to compare the value produced by Bitcoin (negative: crime, money laundering, funding terrorist regimes, tax evasion, etc.) to the value of LLMs.
LLMs enable people who couldn't produce software applications before to do so. This enables new businesses that didn't exist before. Those businesses hire people directly (including eventually software engineers) and create indirect jobs. This is no different than the steam engine or the Internet. You're arguing that the Internet took away the jobs of the people working in the post office because letters could now be sent electronically. I don't have a crystal ball, but historical experience teaches us that new jobs do get created and the economy is not a zero sum game. Maybe this will be different and maybe it won't.
Pretty much anything that needs performance and has a lot of relatively light operations is not a candidate for spawning a thread. Context switching and the cost of threads is going to kill performance. A server spawning a thread per request for relatively lightweight requests is going to be extremely slow. But sure, if every REST call results in a 10s database query then that's not your bottleneck. A query to a database can be very fast though (due to caches, indices, etc.), so it's not a given that just because you're talking to a database you can just spin up new threads and it'll be fine.
EDIT: Something else to consider: what if your REST call needs to make 5 queries? Do you serialize them? Now your latency can be worse. Do you launch a thread per query? Now you need to a) synchronize and b) pay 5x the thread cost. Async patterns or green threads or coroutines enable more efficient overlapping of operations and potentially better concurrency (though a server that handles lots of concurrent requests may already have "enough" concurrency anyway).
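To make the overlapping point concrete, here's a minimal Python asyncio sketch (my own illustration, not from any particular server; `fake_query` just stands in for an async database call):

```python
import asyncio


async def fake_query(i: int) -> str:
    await asyncio.sleep(0.05)          # simulated I/O latency per query
    return f"result-{i}"


async def handle_request_serial() -> list[str]:
    # Serialized: the latencies add up (~5 x 50 ms).
    return [await fake_query(i) for i in range(5)]


async def handle_request_overlapped() -> list[str]:
    # Overlapped: all five queries wait concurrently (~50 ms total),
    # without paying for five extra OS threads.
    return list(await asyncio.gather(*(fake_query(i) for i in range(5))))


if __name__ == "__main__":
    print(asyncio.run(handle_request_overlapped()))
```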
Server applications don’t spawn threads per request, they use thread pools. The extra context switching due to threads waiting for I/O is negligible in practice for most applications. Asynchronous I/O becomes important when the number of simultaneous requests approaches the number of threads you can have on your system. Many applications don’t come close to that in practice.
There’s a benefit in being able to code the handling of a request in synchronous logic. A case has to be made for the particular application that it would cause performance or resource issues, before opting for asynchronous code that adds more complexity.
Thread pools are another variation on the theme. But if your threads block then your pool saturates and you can't process any more requests. So thread pools still need non-blocking operations to be efficient or you need more threads. If you have thread pools you also need a way of communicating with that pool. Maybe that exists in the framework and you don't worry about it as a developer. If you are managing a pool of threads then there's a fair amount of complexity to deal with.
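A rough illustration of the saturation point (my own sketch, using Python's concurrent.futures rather than any code from this thread): with four workers and one-second blocking handlers, a fifth request simply waits for a free thread.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def blocking_handler(i: int) -> str:
    time.sleep(1.0)                    # simulated blocking I/O
    return f"done-{i}"


with ThreadPoolExecutor(max_workers=4) as pool:
    start = time.time()
    results = list(pool.map(blocking_handler, range(5)))
    # ~2 s total: the fifth task had to wait for a worker to free up.
    print(results, f"{time.time() - start:.1f}s")
```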
I totally agree there are applications for which this is overkill and adds complexity. It's just a tool in the toolbox. Video games famously are just a single thread/main loop kind of application.
There’s also a really good operational benefit if you have limits like total RAM, database connections, etc. where being able to reason about resource usage is important. I’ve seen multiple async apps struggle with things like that because async makes it harder to reason about when resources are released.
Basically it's the non-linear execution flow creating situations which are harder to reason about. Here's an example I'm trying to help a Node team fix right now: something is blocking the main loop long enough that some of the API calls made in various places are timing out, or getting auth errors because the signature expires between when the request was prepared and when it is actually dispatched, since that gap is sporadically tens of seconds instead of milliseconds. Because it's all async calls, there are hundreds of places which have to be checked, whereas if it were threaded this class of error either wouldn't be possible, or would be limited to the same thread, or to an explicit synchronization primitive for something like a concurrency limit on the number of simultaneous HTTP requests to a given target. Also, the call stack and other context is unhelpful until you put effort into observability for everything, because you need to know what happened between hitting await and the exception deep in code which doesn't share a call stack.
The execution flows of individual async tasks are still linear, much like individual threads are linear.
Scheduling (tasks by the async runtime vs. threads by the OS), however, results in effectively random execution order either way.
If there is a slow resource, both async tasks and threads will pile up, potentially increasing response times.
Whether async or threads, you can easily put a concurrency limit on resources using e.g. semaphores [1] (a rough sketch follows the list):
- limit yourself to x connections (either wait or return an error)
- limit the resource to x concurrent usages (either wait until other users leave, or return an error)
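A minimal sketch of the waiting variant, assuming an asyncio setting (the names here are illustrative, not from the linked reference):

```python
import asyncio

MAX_CONCURRENT = 5
limit = asyncio.Semaphore(MAX_CONCURRENT)


async def use_slow_resource(i: int) -> str:
    async with limit:                  # at most MAX_CONCURRENT tasks get past here
        await asyncio.sleep(0.1)       # stand-in for the slow call
        return f"ok-{i}"


async def main() -> None:
    print(await asyncio.gather(*(use_slow_resource(i) for i in range(20))))


asyncio.run(main())
```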
Regarding blocking the main loop: with async and non-blocking operations, how would something block the main loop?
And why would the main loop being blocked cause API calls to time out? Is it single threaded?
> The execution flows of individual async tasks are still linear, much like individual threads are linear.
Think about what happens:
1. Request one hits an await in foo()
2. Runtime switches to request two in bar() until it awaits
3. Runtime switches to request three in baaz(), which blocks the loop for a while
4. Request one gets a socket timeout or expired API key
That error in #4 does not tell you anything about #2 or #3, and because execution spreads across everything in that process you have to check everything. If it were threads, you would either not have the problem at all, or it would show up clearly in request three, or you'd have a clear, informative failure on a synchronization primitive saying that #3 held a lock for too long.
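A toy reproduction of that failure mode in Python asyncio (my construction, not the original Node code): one handler blocks the loop, and a completely unrelated handler sees its "signature" expire.

```python
import asyncio
import time


async def request_one() -> str:
    prepared_at = time.monotonic()         # e.g. when an auth signature is prepared
    await asyncio.sleep(0.1)               # yield to the loop; the "I/O" itself is quick
    elapsed = time.monotonic() - prepared_at
    if elapsed > 0.5:                      # pretend the signature is only valid for 0.5 s
        raise RuntimeError(f"auth expired after {elapsed:.1f}s")
    return "ok"


async def request_three() -> str:
    time.sleep(1.0)                        # blocking call: the whole loop stalls here
    return "slow work done"


async def main() -> None:
    # request_one fails, but its traceback points nowhere near request_three.
    print(await asyncio.gather(request_one(), request_three(),
                               return_exceptions=True))


asyncio.run(main())
```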
That makes it harder to control when memory is allocated or released in garbage collected languages, too, because you have to be very careful to trigger gc before doing something which can suspend execution for a while, or you'll get odd patterns when a small but non-zero percentage of those async requests take longer than expected (e.g. load image master, create derivative, send response needs care to release the first two steps before the last, or you'll get weird behavior when a slow client takes 5 minutes to finish transferring that response).
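A rough sketch of that load/derivative/send pattern (the sizes and the sleep are placeholders, not real image code), releasing the big intermediates before the potentially slow await:

```python
import asyncio


async def handle_image() -> None:
    master = bytearray(50_000_000)         # stand-in for a large decoded image master
    derivative = bytes(master[:1_000])     # stand-in for the much smaller derivative
    del master                             # release the big buffer *before* suspending
    await asyncio.sleep(5)                 # stand-in for a slow client draining the response
    print(f"sent {len(derivative)} bytes")


asyncio.run(handle_image())
```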
Arguably that’s something you want to do anyway but it dramatically undercuts the simplicity benefits of async code. I’m not saying that we should all give up async but there are definitely some pitfalls which many people stumble into.
No such thing. In a preemptive multitasking OS (that's basically all of them today) you will get context switching regardless of what you do. Most modern OS's don't even give you the tools to mess with the scheduler at all; the scheduler knows best.
That's not accurate. Preemptive multitasking just means your thread will get preempted. Blocking still incurs additional context switching. The core your thread is running on isn't just going to sit idle while your thread blocks.
There are features, and there is quality, and there is domain.
I worked on a team that built high precision industrial machinery. The team and the project manager decided to delay shipping because there were still problems. We delayed, fixed the problems, and the machine worked really well and was used for at least a decade. If we had shipped it too soon, we would have had to try to fix it at a remote site, and it likely would have suffered from problems.
With most products you want to figure out what your MVP (minimum viable product) is and what quality level your customers expect. If you ship something less than that, it's probably not a good tradeoff. If you build too much and ship too late, that's also not a good tradeoff. When shipping increments, they also need to be appropriately sized and at the right quality level.
Ah, but you're talking about something else: hardware is quite different from software. Once your machine is out in the wild, you can't update it remotely. But with software, shipping MVPs and iterating is not only possible, it's almost always the right way to go about it.
I frequently tell my software teams "We aren't putting rockets in space; we're shipping an admin panel. We can revert code or change things if we don't like it."
I had something similar, but I convinced the other person the rest of the work could be done later. Then the person went ahead and did it anyway, despite the other instances having no use/value. Go figure. I guess consistency has some value to argue for the other side. I tend to be extremely flexible in terms of allowing different ways of doing things, but some seem to confuse form with function, insisting on some "perfection" in the details. I think this is partly why we get these very mixed reactions to AI, where LLMs aren't quite "right" (despite often producing code that functions as well as human-written code).
Consistency reduces the mental cost of acquiring and maintaining an understanding of a system. In a real sense, moving from one approach to two different approaches, even if one of them is slightly better than the original one, can be a downgrade.
Like many other things it's a judgement call. The break down occurs when people replace judgement with rules or "religion". This tends to happen when they don't have the experience of seeing the long term impact of decisions in various contexts.
In a way, simplifying the judgement call to the black-and-white approach “either you change all instances or none” without considering nuance is also a way of managing the mental overhead. Making a simple call lets you spend all your nuance energy in areas where it might matter more.
I agree that it’s also a way of accumulating technical debt, it’s all a bit of a tradeoff.
But then you end up with nit inflation: people feel like they need to fix the nits, and do, and "nit" stops meaning anything. I try to just not comment unless I feel there is some learning in the nit.
Lost me at dynamic languages. Don't build anything of any significance in dynamic languages! ;)
Some good points. Laughed at TDD is a cult. I mean a lot of software orgs/cultures are cultish (Agile, Scrum, whatnot). At work I often feel I'm part of a cult.
On the contrary, I find "The older I get, the more I appreciate dynamic languages. Fuck, I said it. Fight me." is exactly my sentiment too, with a caveat. I really like gradual typing, like python has. And not like ruby has (where it's either RBS files and it's tucked away, or it's sorbet and it's weird).
The worst code base I had to work in by far was a Python code base. Extremely difficult to refactor. Many bugs that were completely avoidable with static typing. I think maybe more modern Python is a little bit better but wouldn't be my choice for large projects. It's not just about correctness. It's also about performance. That code was so slow and that impacted our business.
Meanwhile the worst codebase I've had to work in by far is golang where someone clearly took the language's limitations as a challenge and not as an intentional constraint on writing clever code. And it's an impressive feat because I too have seen horrifying clusterfucks of python codebases with no typing whatsoever and very sloppy hygiene.
My take on static vs dynamic is that a sufficiently motivated programmer can make a mess out of anything they're given, and that types actually really don't help that much. Furthermore, "the types work out!" is also not actually an incredibly comforting fact to me. There are so many more places things can be wrong. And I also find that the types of errors static typing prevents tend to not be the most meaningful errors to prevent or the hardest to catch in subsequent testing, ESPECIALLY with gradual typing!
With python in particular, gradual typing with a checker gets you 99% of the benefits of static typing, with the HUGE added benefit of you just being able to tell the type checker to stfu when it's not adding value. ORMs and data parsing are so much easier in dynamic languages, for instance. And I find the most ergonomic ORMs and data parsers in static languages tend to be the ones that have gone to extraordinary lengths to make them feel like the stuff you just get much more cheaply in dynamic languages. I have recently been writing python with basedpyright and very intentional type hinting and it has been my favorite experience in a long time. More impactful to my productivity (real productivity - actually producing things that work and are real) than AI.
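For what it's worth, a small illustration of what that gradual approach can look like (my example, not anyone's production code): type hints where they pay off, an explicit `Any` boundary where they don't, and a checker like pyright/basedpyright only flags the annotated parts.

```python
from typing import Any


def total_cents(prices: list[int]) -> int:
    # Fully typed: a checker flags total_cents(["1.99"]) before you ever run it.
    return sum(prices)


def handle_raw_payload(payload: Any) -> None:
    # Deliberately untyped boundary (e.g. an ORM row or a parsed JSON blob);
    # here we accept runtime checks instead of fighting the type checker.
    print(payload["id"], payload.get("name", "unknown"))


handle_raw_payload({"id": 1, "name": "widget"})
print(total_cents([199, 250]))
```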
One rebuttal to that is that with the benefit of hindsight, to a first approximation zero percent of the code I've written in my career turned out to be "of any significance" really.
Same. That line about "your legacy is your family and friends" hit hard.
I've been coding professionally for >30 years. I don't think any of my code has survived 5 years in production.
I don't think code quality affected that at all - I know the really, really, shitty code I wrote when learning OOP in the 90's survived for a looong time, while the amazing code I wrote for a startup 2018-2021 died with it.
One of the projects I'm most proud of is still running ten years later, and has processed over a billion AUD through it in that time, with very minimal maintenance. I recently consulted on it, and sure enough it's still ticking along nicely! The code is honestly quite good too, even if it is PHP (though in a very nice microframework we wrote on top of Silex: removed all magic that a lot of these systems relied on. No annotations!)
I haven't been doing this forever (only 10+ years), but surprisingly I think a majority of what I've written is still running. Probably a fair bit will continue to run for a while yet too, I think (again, surprising for CRUD web apps).
Most code I wrote over my career got pretty decent use and produced value for customers. Some was used by millions of people. What I work on today is used by thousands. It's important that it is of reasonable quality, with fewer bugs, decent performance, the functionality users are looking for, etc.
A lot of code makes a difference but I guess there's a lot that doesn't?
I'd guess, on average, code I've written has a half-life of maybe 3 or 4 years. There's pretty much none of my code (with a few surprising exceptions) that's still been running or in production anywhere for more than 8 or 10 years.
At the time, a lot of it felt "important" and "significant". And some of it probably was at the time, to the businesses I wrote it for. But whether I sweated blood and tears to craft the most elegant and efficient software I was capable of, or I phoned it in and just copy/pasted Stack Overflow answers together until I met some interpretations of a requirement to be able to leave the office on time - really made no difference.
I've been pondering lately, thinking about GenAI and vibe coding, with the very real risk of creating completely unmaintainable codebases - whether that matters, if the code is likely to be retired or rewritten in 3-4 years anyway? My current gig is on to the 4th rewrite of its web/mobile app backend platform in 15 years, which started out as a Groovy on Grails app, which got rewritten in Java, then rewritten again in Java, and now it's being rewritten in Python. Each rewrite had fairly good reasons at the time, but a huge amount of code here gets thrown away every 4 years or so - which looking back makes me seriously question whether any of it was "of any significance". To be honest, the 2026 Python code really isn't doing anything notably different or more complex than the Perl and JavaScript code I was writing in 1996 - web work is CRUD apps all the way down.
As a person who started in a dynamic language (PHP; don't laugh, it's actually really good for web dev) and worked in an infrastructure team which had to do a lot of refactoring... I can't agree with the sentiment either. Dynamic languages _look_ good, but lightly typed languages like Go strike a much better balance in my opinion
> Don't build anything of any significance in dynamic languages!
Posted on a significant website built in a dynamic language.
I tend to disagree. Static typing can catch some bugs, but most serious errors are not type errors, and the common situation where the type system disallows just enough invalid states for developers to get complacent is the worst of both worlds.
I'm not a fan of dynamic typing at all (currently maintaining a decades-old e-comm monolith written in Ruby on Rails), but instead of arguing about which bugs are caught where, I've switched to arguing from a position of developer experience. The _tooling_ that statically typed languages have is levels above what's found in dynamic languages. Runtime errors are runtime errors, but knowing at typing-time that the shape of thing A is what thing B needs is a huge benefit.
TDD is a cult. But knowing your pre-conditions and post-conditions for the isolated parts of your code is important. I think all your AI codegen will work better with this.
The entire AI ball of wax is built on python (dynamically typed) - or at least a large part of it. It probably needs to move to rust to save on power and compute cost.
The heavy lifting of AI is done by GPUs that are not running Python. But yes, a lot of orchestration and glue work is done by Python. Python can be a decent glue language and it has its place. But if the core/high performance logic of inference and training was written in Python then we wouldn't have today's AI. I imagine there are other languages in the mix.
Python is also the choice of non-programmers for simple work. Nothing wrong with that. But I wouldn't want e.g. my car's ABS system to be programmed in Python (or my browser or my OS or many other examples).
> But knowing your pre-conditions and post-conditions for the isolated parts of your code is important.
Design-by-Contract[0] is a formalization of this concept and well worth considering when working in code using mutable types. In addition to pre/post conditions, DbC also reifies class invariants (which transcend method definitions).
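A minimal, hand-rolled sketch of the idea in Python (libraries like icontract, or Eiffel's built-in support, formalize this properly; the Account class here is just illustrative): preconditions and postconditions on a method, plus a class invariant checked around every state change.

```python
class Account:
    def __init__(self, balance: int) -> None:
        self._balance = balance
        self._check_invariant()

    def _check_invariant(self) -> None:
        # Class invariant: the balance never goes negative.
        assert self._balance >= 0, "invariant violated: negative balance"

    def withdraw(self, amount: int) -> int:
        # Precondition: a positive amount no larger than the current balance.
        assert 0 < amount <= self._balance, "precondition violated"
        old_balance = self._balance
        self._balance -= amount
        # Postcondition: the balance decreased by exactly `amount`.
        assert self._balance == old_balance - amount, "postcondition violated"
        self._check_invariant()
        return self._balance


print(Account(100).withdraw(30))   # 70
```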
A lot of startups are cults. Tesla is maybe the final form of a cultish startup, where the stock owners don't care about anything anymore.
That said, the people who change companies aren't the ones that believe that management ever had the best ideas, or are able to push back on the cult thinking with clarity. Unfortunately, though, it's not necessarily evidence that wins arguments, it's charisma, which is how the cult is started in the first place.
In my career I've seen endless examples of hopelessly badly designed software where no amount of optimization can turn it into anything other than a piece of garbage. Slow, bloated and inefficient.
By the time you ascertain the issue, it's already too late for bad software. The technical term is polishing a turd.
Not that what you're describing doesn't happen, people trying to make something irrelevant fast, but that's not the big problem we face as an industry. The problem is bad software.
Before you write code you design (and/or architect) a system (formally or informally).
There's too little appreciation today for a well designed system. And the "premature optimization" line is often used to justify not thinking about things because, hey, that's premature. Just throw something together.
Like everything else there's nuance and a range of appropriate behaviors. It's probably worth spending some time beforehand designing the next mars rover's software but it's real easy to get, say, the design of an ai based program editor wrong if you aren't getting user feedback.
Getting feedback from users for a product is important as well. Those are somewhat orthogonal concepts. I'm not proposing analysis paralysis or no prototyping but I am saying there are some things that if you didn't consider in advance can become huge issues down the road. There are examples (e.g. Facebook or the Google crawler) where very successful products started with something not great and then were able to fix that later but I would argue most of the very successful products and platforms (software or not) have had some non-negligible thinking/planning upfront.
I mean, sure, but this becomes kinda circular, "do some planning" "how much?" "the right amount", etc.
I don't think anyone is arguing for zero planning, but in terms of very general rules we can talk about on blogs and such, I would definitely advise people to "do more and think less", to paraphrase an old Prussian general.
I would say overplanning is a much more common problem but the issues it causes tend to be much less noticeable than the occasional really exciting under planned project.
- Over-optimizing for short term profit can hurt innovation and value creation.
- The economy is not a zero sum game and new value is created out of thin air.