This is the core unspoken bone of contention in most AI arguments, I think: most people either aren't writing code with strict quality requirements or don't realize where their use of AI is violating them.
That said, most of the world's most useful code has strict quality requirements. Even before AI, 90% of SLOC would be tossed away without much if any use, 9% was used infrequently, while 1% runs half the world's software.
Probably when combined with batteries it is half the price.
There are some colder areas, especially in northern Europe, where solar doesn't work as well, but they also tend to be better served by hydro (which can also store power).
Solar also works in the north, except in the winter of course, and it complements wind pretty well. So solar does make economic sense and is actively being built in the north too.
Northern locales do have a much greater energy need for heating in the winter, though. So the "battery" solutions can often just be cheap heat batteries, because there isn't really such a thing as "waste heat" there - the heat can be used directly, without worrying as much about efficiency losses in conversion.
There are already a bunch of examples of Northern locales using these heat batteries: heat up a big block of something when energy is cheap and solar/wind are overproducing, then use a network of insulated pipes to distribute the water heated by it.
The problem is that nuclear reactors are huge so you're never going to build that many of them compared to wind turbines (thousands) or solar panels (millions).
France plans to build a series of six reactors for its EPR2 programme, with each reactor scheduled for completion 1-2 years apart, but that is only expected to reduce costs by 30% compared to the (hugely expensive) EPR.
Small modular reactors hope to improve things but it's far from clear they will end up any cheaper. Historically making reactors bigger makes them more efficient. The Rolls Royce SMR is just under 1/3rd of the size of the EPR so even if successful any cost reductions are not likely to be dramatic.
Europe was spending 200 billion a year on gas from Russia. They could try to build 100 reactors for that price, but it would take a couple of years, I imagine...
How much would it cost to build out batteries which cover an entire continent's electricity needs for, say, three weeks (as there can be 2-3 week lulls of no wind and no sun in Europe in the winter)? Because that sounds like a lot of batteries. Not to mention, if a freak 4-week lull occurs, we'll go back to the Middle Ages for a week.
Australia's CSIRO studied this for Australia: renewables were half the cost of nuclear, factoring in storage and transmission for both renewables and nuclear (yes, nuclear also needs storage, because energy demand varies over time). Australia is uniquely endowed with sun and land, so other countries/regions may arrive at different results.
Australia is also well endowed with coal and has no carbon pricing, so for Australia the cheapest form of electricity production is a mix of solar + battery + coal.
Solar still produces even in overcast conditions during the day. Under light-to-medium overcast, which is what Germany usually has, it still produces 50-80% of nominal. It only really produces nothing at night or when it snows.
"But what if the thing that never happens were to happen?"
We'd probably go deep into hydro, fire up every gas peaker plant, and through skyrocketing prices incentivize everyone to switch to emergency diesel generators where possible.
You're talking about a once-in-100+-years event. We'll deal with it the same way we dealt with the various oil crises.
"Based on the much-studied 1977 New York City blackout, ICF Consulting estimated the total economic cost of the August 2003 blackout to be between $7 and $10 billion"
(for Australia it is 5, for other countries it might be 8)
Once you get to the "nice to have" problem of what to do about the remaining 3% of power needs, it would probably make most sense to synthesize and store gas (methane/hydrogen) from electricity when solar and wind are overproducing. Gas can be stored cheaply for long durations. The roundtrip efficiency is poor, but it's still cheaper than nuclear power on the windiest, sunniest day.
The nuclear + carbon lobbies would of course prefer to model green energy transitions by pretending that the wind and sun simultaneously turn off for 2 weeks at a time every year and that electricity can only be stored in very expensive batteries. This is not realistic.
It might not be quite that good in less sunny countries. Similar modest overbuilding of wind and solar in Denmark is simulated to get to about 90% with 12h of storage. This is still good enough though.
Hand on heart: when was the last time you built a serious production system for a real business that was 100% built from HTML without using any build step? Just editing the footer and header in every file when it updates (or using iframes)?
Maybe not in your corner of the internet, but businesses used server side includes (SSI) for that, not iframes.
You add “include” tags to your HTML file, and a web server like nginx or Varnish replaces them with the fragment at runtime:
<!--#include virtual="../footer.html" -->
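On nginx this is handled by the `ngx_http_ssi_module`, and enabling it is a small config switch. A minimal sketch - the server layout and paths here are illustrative assumptions:

```nginx
server {
    listen 80;
    root /var/www/site;

    location / {
        ssi on;               # process <!--#include ... --> directives
        ssi_types text/html;  # only post-process HTML responses
    }
}
```

The fragments themselves are just plain HTML files sitting next to the pages, so editing `footer.html` once updates every page on the next request.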
I saw this was still quite popular for big publishing houses with millions of articles as recently as 10 years ago. They would only write the HTML body of a new article, and the other fragments would be included by the web server.
Very cheap, stable, and very big changes across the whole website could be made instantly, since cache invalidation is trivial (the web server knows the modified dates of all fragments).
Also, no additional CDN or caching needed. Later, with CDNs there was even a variant where these fragments were hosted at the edge (ESI).
10 years ago. I bet you can't name a single production site writing HTML files without any build tools at all. Just raw-dogging it in Notepad directly on the server.
That is the point. No one writes HTML without any abstractions anymore. You use a framework or a build tool, because just editing pure HTML files is a pain in the ass. I probably haven't done that since 2010.
The first time I used LLMs, it was to try to refactor behind a solid body of tests I trusted.
I figure if it can't code when it has all of the necessary context available and obscure failures are easily detected, then why would I trust it when building features and fixing bugs?
I agree. The mechanical refactoring of modern IDE tooling, especially with typed languages, is so much faster and safer it's not even close. These tools can be useful for sure, but I think in general they are being way over-prescribed to different tasks.
One of the nastiest aspects of migrating from Docker to Podman really is "what to do about docker compose?", because there are three wildly divergent ways to answer that, all of which really suck under certain specific circumstances.
I'm no fan of Docker, and Podman by itself is a step up, but orchestration headaches are enough to ruin that.
I don't understand what you're asking here. The answer to that is probably nothing. That is, unless you want:
- systemd to manage your containers
- to use K8s primitives (which are mostly compatible)
I'm unsure what the 3rd method you're talking about is. The nice thing about Podman's Docker-compatible API is that you don't have to change anything (mostly). You can point all your Docker tooling at Podman's socket, and it'll (mostly) magically work.
* use systemd, Red Hat's favorite kitchen sink for handling everything from setting up sound services to mounting your home dir to logging, so why not this too, I guess.
* docker compose, where I have to run a whole separate Podman service to lie to docker compose about not actually being Docker.
* podman compose, which would be the obvious solution if it didn't just plain suck.
> * use systemd, Red Hat's favorite kitchen sink for handling everything
Systemd is a tool for managing services. Containers are services. Why require an entirely separate bespoke service manager when you're already running one?
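For what it's worth, the systemd route via Quadlets is just an ini-style unit file dropped into `~/.config/containers/systemd/`. A minimal sketch - the image, port, and names are illustrative assumptions:

```ini
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example web container

[Container]
# Image and published port are illustrative assumptions
Image=docker.io/library/nginx:alpine
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, the generated unit behaves like any other service: `systemctl --user start web`.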
> * docker compose, where I have to run a whole separate Podman service to lie to docker compose about not actually being Docker.
This is the same system state as using docker compose with Docker: you have a client program speaking to a backing daemon. The only difference here is that the Podman service, being daemonless, only runs when needed (assuming you're setting things up the documented way by enabling the podman socket).
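Concretely, the documented rootless setup amounts to a couple of commands (a sketch; assumes podman and docker compose are installed):

```shell
# Enable the socket-activated Podman API service (rootless).
systemctl --user enable --now podman.socket

# Point Docker tooling at Podman's Docker-compatible socket.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# docker compose now talks to Podman; the service only runs on demand.
docker compose up -d
```

Nothing stays resident between invocations - the socket activation spins Podman up per request.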
> * podman compose, which would be the obvious solution if it didn't just plain suck.
Yeah, I haven't had the best luck with it either. But part of the reason it's languished is that it makes more sense to just reimplement the Compose spec on the backend rather than reinvent the wheel and create a new compose client as well.
There's also the fourth option of writing Kubernetes yaml and applying that with `podman kube play`. Honestly this is probably closer to being the Podman equivalent of docker compose, but since it involves writing The Bad YAML (Kubernetes) rather than The Good YAML (Compose), most people don't use it.
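For reference, `podman kube play` consumes an ordinary Pod manifest. A minimal sketch - the names, image, and ports are illustrative assumptions:

```yaml
# pod.yaml — run with: podman kube play pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx:alpine
      ports:
        - containerPort: 80
          hostPort: 8080
```

The same file also works under actual Kubernetes, which is the main argument for this route over Compose.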
It's a tool for user age verification that happens to be something you can use to manage services.
Did you miss my point about it being a filthy kitchen sink?
>This is the same system state as using docker compose with docker
One of the major selling points of Podman is that you don't need a daemon. Except maybe yes you do, because podman compose sucks, so toss that selling point in the trash.
This shit is also incredibly fiddly. Ever had "docker compose" and "docker-compose" do subtly different things that drove your teammate to pull their hair out? I have.
Podman should stop trying to piggyback off docker if it's trying to be an alternative.
>Yeah I haven't had the best luck with it either. But part of the reason it's languished is that it makes more sense to just reimplement the Compose spec on the backend
Personally, I suspect it languished because Red Hat simply can't abide the idea that somebody out there might avoid using systemd for something.
They happily built a docker compose to Quadlets converter, but they can't bring themselves to make podman compose not be a piece of shit, even though it wouldn't be a lot of work.
> It's a tool for user age verification that happens to be something you can use to manage services.
Good talk buddy.
> Did you miss my point about it being a filthy kitchen sink?
I suspect there's not really a point in responding to this since you've already made up your mind.
Nevertheless, yes, I am aware the systemd project contains many modular components. Some of them are good (systemd-the-service-manager, which is what I was referring to), some of them are bad, and some of them are just odd (I still haven't wrapped my head around systemd-homed's purpose). Podman integrates with the systemd service manager, not the rest of the project, so I'm really not concerned about that: there is no point where I am unable to use Quadlets because I don't have, say, `systemd-timesyncd` installed.
On the gripping hand, Quadlets are just a systemd generator, so there's nothing stopping you from getting those exact same benefits with some other service manager. You'd just have to write that implementation (and probably your own bespoke service manager), and you'd probably miss out on some of the niceties systemd provides to anything it manages.
> One of the major selling points of Podman is that you don't need a daemon. Except maybe yes you do, because podman compose sucks, so toss that selling point in the trash.
You skipped the second part of my sentence, where I reminded you that Podman is daemonless. There is no long-running Podman daemon/service/etc.; it is spun up on demand and stops when the action is done. Having a second process instance is not a daemon, and I'm not sure how you would have expected this to work otherwise.
> Ever had "docker compose" and "docker-compose" do subtly different things which drive your team mate to pull their hair out? I have.
...Take this up with Docker?
> Personally, I suspect it languished because Red Hat simply can't abide the idea that somebody out there might avoid using systemd for something.
> They happily built a docker compose to Quadlets converter, but they can't bring themselves to make podman compose not be a piece of shit, even though it wouldn't be a lot of work.
I don't think `podman-compose` was ever an official Red Hat project. I don't think there was ever really much interest in ironing out all the corner cases, especially before Compose was actually fully specced, and once Podman itself implemented the spec, the interest has been drying up.
Assuming you're referring to podlet[0] for the latter, that was never a Red Hat project.
>I suspect there's not really a point in responding to this since you've already made up your mind.
I suspect you ignored the point because you didn't want to address it.
Repeating it only seems to have highlighted your wish to keep avoiding it here.
>Nevertheless, yes I am aware the systemd project contains many modular components.
Another red herring. "Modular" really isn't the point here.
It's certainly one way to justify throwing even more shit in an already overloaded kitchen sink though.
>You skipped the second part of my sentence where I reminded you that Podman is daemonless.
No, you skipped the part where I acknowledged that it was daemonless by default, but you actually DO need to run a Podman service if you're using docker compose with Podman.
>Take this up with docker?
They're not responsible for Podman trying to piggyback on their tools.
This is what stopped me from picking up Podman, too: all our devs use Docker and have been writing compose files for years now. When the response at the time was "you're using Podman wrong, Quadlets are the hot stuff now", it just felt like too big a risk and commitment to jump to. Have things settled more? Getting away from Docker is a bigger priority for us nowadays.