
Yeah, but what's the burn rate?

If it's going down at 1 day per week then it's not so bad. If it's closer to 0.75 days per day, that's much more serious.


> I administrate are contractually obligated to be so isolated

Yeah, I've seen those contracts. They just reference a SeCuRiTy doc that's 20+ years old, and has never been re-evaluated. Things are secure because they follow the doc, not because they have actually evaluated the reasonable attack space.

I've been fighting customers for years on their ideas of proper TLS usage and it's always the same thing. They've got a security doc that never changes and has never evaluated any of the trade-offs. It's almost as if the people who wrote them chose things that increase downtime and KTLO work without helping security.


Ah-yup. The equivalent in my world is contracts that insist we make our employees rotate their passwords every 2 months or whatever, which was a popular (but still dumb) idea 20 years ago and is strongly recommended against today.

Yep. I get real tired of adding a month and year to the same base password every time I need to rotate it.

On week one of my current job, I turned that off for the whole company. Here's the citation you can give your security department to show them why they're doing it wrong.

NIST Special Publication 800-63B, the July 2025 version, section 3.1.1.2, says:

"Verifiers and CSPs SHALL NOT require subscribers to change passwords periodically. However, verifiers SHALL force a change if there is evidence that the authenticator has been compromised."

The previous version from June 2017, section 5.1.1.2, says:

"Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator."

So 9 years ago, NIST said to stop requiring that. Last year, they clarified that to say, no, really, freaking stop it. Any company still making people do that today is 9 years out of date, and 1 year out of compliance.


> keeping backward operational compatibility

It is not possible to be backwards compatible with a larger address space


You are right that a 32-bit ipv4 stack cannot understand a 64-bit packet format. The thing I am trying to get at is not native compatibility, it is operational compatibility via translation. I know, I know, you will probably say that is what ipv6 bridges do.

But in an ipv42 type setup, you would have deterministic embedding so that every ipv4 address is represented inside the larger address space. This would allow translation at network boundaries and let old systems continue to operate unchanged. Then the routers and systems would be upgraded incrementally. I think that is why adoption would have gone more quickly.
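
To make the idea concrete, here is a minimal sketch, assuming a hypothetical 64-bit ipv42 layout (the names and layout are invented for illustration): legacy addresses occupy a fixed slot, so a border device could translate in both directions without keeping per-flow state.

    import ipaddress

    # Hypothetical layout: all of IPv4 lives in the low 32 bits of a 64-bit space.
    def embed_v4(v4: str) -> int:
        """Deterministically map an IPv4 address into the larger space."""
        return int(ipaddress.IPv4Address(v4))  # high 32 bits stay zero

    def extract_v4(addr42: int) -> str:
        """Reverse the mapping; only valid for embedded legacy addresses."""
        assert addr42 >> 32 == 0, "not an embedded IPv4 address"
        return str(ipaddress.IPv4Address(addr42 & 0xFFFF_FFFF))

    assert extract_v4(embed_v4("192.0.2.4")) == "192.0.2.4"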


> But in an ipv42 type setup, you would have deterministic embedding so that every ipv4 address is represented inside the larger address space

IPv6 supports that, but it ended up not getting used very much.

See https://en.wikipedia.org/wiki/List_of_IPv6_transition_mechan...


I remember reading about that a long time ago. I wonder why it never really caught on?

I think part of the problem is not so much a technical one as a coordination issue. Who are you more likely to get on board? ISPs and backbone providers. What is the path forward? "Here is the recommended path forward", kind of thing.


I don't see how it matters; we forced people into ipv6 as well. Who cares? It's more about the difference in mental models that prevented adoption, especially among those who run the services that are on the internet.

Your proposal (translation) is addressed as point 3B in the article.

I went and re-read point 3B. I agree that some hypothetical ipv42 faces a translation problem.

But it does not follow that address design is irrelevant. The structure of the address space directly determines whether translation can be stateless and algorithmic.

In a hypothetical ipv42 design that preserves a deterministic embedding relationship between old and new addresses, translation at the edges could be largely stateless and mechanically reversible, reducing coordination overhead between operators and making reachability more predictable.

In our actual ipv6 world, the transition seems to require a mix of dual stack, nat64, dns64, and tunneling approaches. The mapping between ipv4 and ipv6 is not uniformly deterministic across all deployment contexts.
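
To be fair, ipv6 does define one deterministic mapping of this kind: RFC 6052's well-known NAT64 prefix 64:ff9b::/96, with the ipv4 address in the low 32 bits. But it applies per NAT64 deployment rather than network wide. A rough sketch of that construction:

    import ipaddress

    # RFC 6052 well-known NAT64 prefix; the IPv4 address fills the low 32 bits.
    WKP = int(ipaddress.IPv6Address("64:ff9b::"))

    def v4_to_nat64(v4: str) -> str:
        """Embed an IPv4 address into the NAT64 well-known /96 prefix."""
        return str(ipaddress.IPv6Address(WKP | int(ipaddress.IPv4Address(v4))))

    print(v4_to_nat64("192.0.2.4"))  # -> 64:ff9b::c000:204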

Also, there is just a human factor. The mental gymnastics that go on. The perception of what is the way forward? With ipv6, it feels like everyone has to go get their ipv6 stack in order. With a hypothetical ipv42, where the ISPs and backbone providers can throw in the translation layers, it feels like, to me, they would have gotten on board much more quickly. Yeah, I know, it is just a feeling.


I agree with you about the embedded addresses, and I don't understand why the mapping was moved from the all-zeros prefix to a bunch of other mappings.

but the utility of this isn't that high. we already know how to handle 4-4 and 6-6 traffic just fine. but if a 4 host wants to talk to a 6 host, it just doesn't have the extra bits to describe it, so this doesn't facilitate 4-6 endpoint communication at all. this is true even if you substitute v6 with any other layer 3 with a larger address space.

where it does help is in a unified routing backbone, that would allow v4 prefixes to be announced in the v6 routing system. which is arguably useful.


We have that, it's called ipv6. A section of the v6 address space is set aside to hold all v4 addresses.

The embedding I believe you are referring to is not a part of the global routing model (maybe I am wrong?). What I am describing is making that kind of mapping central to the system: a deterministic, network-wide mapping of ipv4 into the larger ipv6 space. The translation in ipv6 ended up being handled by a mix of mechanisms after the fact, rather than a single, uniform mapping model tied directly to the address structure. I think part of the problem is they did not put that front and center, at the beginning, when doing the initial specification.

How would an embedding handle the other 99.999999999999% of addresses not embedded?

At least at first, you wouldn't, you'd embed all of them. Cloudflare has 1.1.1.1, so they get 1.1.1.1:: too.

Not doing that was one of the key points of starting fresh with IPv6. Doing that would mean that you could end up with billions of routes to consider.

One reason for the large address space is that networks could be allocated sparsely, leaving room to grow, thus allowing fewer routes in general.


Indeed doing it this way would keep the fragmentation, or at least delay fixing it. That's what these articles always overlook, the goal of ipv6 wasn't to just add more bits, it was also to defrag the routes.

I think instead of 1.1.1.1::, you could do 4:1.1.1.1::, wait for v4 to be gone, then start building new topologies in the other /8s. Not sure how hard that is, but it seems easier than what they're trying to do now.


Would it help at all? You can't just send IPv6 packets down the equivalent IPv4 path because that next-hop router probably doesn't understand IPv6 packets. In fact there could be no IPv6 path at all between you and the destination, so knowing where they are still wouldn't help you forward packets. If it understood them, it would have given you an IPv6 route anyway. Updating BGP to support IPv6 routes wasn't an actual problem.

There are lots of services I can't send v6 to, not because some router in the middle only understands v4 but because the service operator decided not to deal with v6.

So the idea is to surreptitiously install software on the service operator's machines that they can't disable?

It's already a bit like that, but they can and do disable it. You can see the other comments in this thread: many people disable IPv6 upon any sign of a networking problem.


No, the idea is you can turn v6 on/off, but doing so only changes the packet format and nothing else at first. There's no separate place to configure v6-specific settings because there are none. You use the same address, routes, DHCP, NAT, DNS, etc as v4, but you're limited to 32-bit addrs at first. The point is to just get people off v4.

Once v6 has reached enough adoption, you can turn off v4. Those who want to keep the addrs from v4 can, except now they get way more addresses under those too. Others can start building a clean new topology under the other prefixes without worrying about compatibility.


I don't see why anyone would make all the changes you'd actually need to make for some nebulous future gains. You'd still have to deal with new sockets and new routing decisions at least, without really gaining much in the way of new features.

To me it looks like something that would have gained nearly no actual adoption outside some toy examples. Later you will need to get new DNS, DHCP (or an alternative), and so on anyway.


That's a legit concern. If that's not interesting enough to the kind of user that wants an all-new v6, then instead start from today, where some users are already on the new v6 network, and say they added the 4:: prefix as a way to pick up the kind of user that doesn't want to change much. They'd still be compatible eventually. Though the reason I was thinking 4:: from the start would've been attractive enough is that a lot of people did use 6to4 and other halfway measures despite having no immediate gain.

Today's DNS6, DHCP6, etc. are totally incompatible with v4. 4:: buys backwards compatibility. Each can be updated to support longer addrs without caring whether you use it with v4 or v6.


> At least at first, you wouldn't, you'd embed all of them. Cloudflare has 1.1.1.1, so they get 1.1.1.1:: too.

Everyone with an IPv4 address automatically got an IPv6 allocation:

> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.

> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.

* https://en.wikipedia.org/wiki/6to4

What does it mean to have a /48? Well, an IPv6 subnet is /64, so that's 16 bits for subnets. In IPv4 land, if you take a subnet to be /24, an allocation with 16 bits worth of subnets would be a /8.

So basically, with 6to4, every person with an IPv4 address got the equivalent of a Class A in IPv6.
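
For the curious, a quick sketch of the derivation quoted above, reproducing the 192.0.2.4 example (the helper function is mine, not from the RFC):

    import ipaddress

    def sixto4_prefix(v4: str) -> str:
        """Build the 6to4 /48 prefix for a global IPv4 address (RFC 3056)."""
        base = int(ipaddress.IPv6Address("2002::"))
        # The IPv4 address occupies bits 16..47, i.e. shifted left by 128-48=80.
        v6 = ipaddress.IPv6Address(base | (int(ipaddress.IPv4Address(v4)) << 80))
        return f"{v6}/48"

    print(sixto4_prefix("192.0.2.4"))  # -> 2002:c000:204::/48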


This is a fake argument. No one is arguing for backwards compatibility.

But there was also no necessity to demand reshaping networks and changing address assignment in a way that made migration extremely work intensive and hard to deploy in parallel.


How would you do it?

I wouldn't have tried to reinvent DHCP, would have kept NAT, and generally would have tried to keep the overall shape of a v6 network the same as v4 networks to ease the transition of large deployments.

Ipv6 now has most of that - after years of resistance - which results in a mixed mess of "several ways to do it" approaches spiced with clients and equipment supporting a random set of them.


And yet 50% of the internet is using CGNAT just fine. The extra bits are just in a different place.

Yes, but CGNAT is an inherently stateful system and as a result will always be more expensive to operate per packet than a stateless router. The reason we are seeing steady (if slow) growth in native IPv6 is because the workarounds for IPv4 exhaustion cost money, and eventually upgrading equipment and putting pressure on website operators to support IPv6 becomes cheaper than growing CGNAT capacity.

Because there are so many applicants that have good grades.

A more cynical view is that the governing boards want a way to pick and choose who they let in. So they create "holistic" application systems to get "360 degree view of the candidate".


No matter how many have good grades, you can always pick the top n by grades—unless there's a ceiling that the top m > n have all hit. Which, if you're talking about "grades" as in GPA, is plausible.

MCAT seems more relevant, though. According to Claude: "Roughly 0.1% or fewer of test-takers score a perfect 528 in any given year — typically only a few dozen individuals out of the ~120,000 or so who sit for the exam annually." So it should work fairly well for them to sort by MCAT and take however many they have (or expect to have) room for.


I think OP's point was that the governing boards don't want the people with the top n grades. They want certain people, and by making the admissions criteria fuzzy, they can pick and choose those certain people and then say "well, our admission criteria is subjective," and "we are looking for 'well rounded people," and all kinds of other vague weasely ways to let them legitimately shape the student body in the way they want.

See also: "Cultural fit" when hiring.


One of my roommates who was premed had a "hot car" poster as a motivational study aid. After a short term as a candy striper at a local hospital, he changed majors. The system works! ;-)


At a certain point, grades become arbitrary and won’t necessarily select for the best candidates. Obviously the current system doesn’t, either.

The actual solution is to increase the number of slots for training doctors to match the huge number of qualified applicants. It makes even more sense given that there is a shortage of doctors and health care costs are astronomical.


I want a doctor who was a strong student with diverse experiences, lots of soft skills and can handle the entire psychological spectrum of being a doctor, not the doctor who was solely the best at exams.


There are all kinds of doctors though? The ones who don't have soft skills or diverse experiences can go into pathology or other fields that don't involve as much patient interaction. Why lose out on their gifts altogether if they're genuinely interested in medicine.


> No matter how many have good grades, you can always pick the top n by grades. Which, if you're talking about "grades" as in GPA, is plausible.

I live in Ontario and we're there. 40% of Waterloo students had above a 95% average in high school. The average GPA to get into UofT med school is 3.94/4.00 GPA.

What has happened as a result is students killing themselves and each other. If you fail one test in any course, you cannot move to the next level.

So, if you go on the UofT subreddit there are endless stories of pre-med students sabotaging each other. Faking friendliness, destroying notes, etc etc. This is arguably rational because the pool is small and there's little to gain by studying harder if you already have a perfect GPA.

https://www.reddit.com/r/UofT/comments/1sbu811/had_no_idea_t...

You don't want this type of person as a doctor. They will sabotage others because that is how they got ahead in the past. In a medical environment that kills people.


Too many kids want to be doctors and have the grades for it? That's an opportunity, not a problem.

Training more doctors is just never an option for some reason.

Don't build systems that reward amoral psychopaths.


We've opened a new med school after a decade of planning. 1.5% acceptance rate.


> This is arguably rational because the pool is small and there's little to gain by studying harder if you already have a perfect GPA.

So there is a low ceiling, and if they instead used MCAT or something with a higher ceiling (where, apparently, the number of perfect scores is about 50 per year—in America, presumably lower in Canada due to population size), then studying harder would benefit them. That seems like a much better outlet for competitive urges.

But also, how small is the pool of qualified applicants? If there were something like "they're going to take n people from your school, at which there are 30 plausible candidates", then sabotaging one might conceivably be worthwhile. But if the pool is—well, Google says 3,000 medical students get accepted each year in Canada (and the qualified applicant pool is presumably at least somewhat larger), and sabotaging one person is extremely unlikely to help you personally. (This is one case where it's good that the expected-value "benefits", of sabotaging person X, are widely distributed among thousands of medical candidates, and thus it's a "free-rider problem" where no individual candidate has a strong motivation to do the work.)

Is there some multi-stage thing where they pick 10 people from each high school, or 30 from a town, or something? Or is there major grading on a curve, or a big benefit for being the top person in your classroom of 15? That seems like how you would get real incentives for this backstabbing behavior. Otherwise, I can't see how it's rational (even to a complete sociopath), and would have to chalk it up to individual miscreants and possibly some kind of culture that encourages it in other ways.


> Or is there major grading on a curve, or a big benefit for being the top person in your classroom of 15?

Yes. UofT even has "down curves" where your mark is lowered to ensure the correct distribution.


> Because there are so many applicants that have good grades.

So train more doctors.


That would increase competition and thus depress wages for existing doctors, who are the ones who make the decisions here. I heard, from a medical school attendee, that she overheard some doctors discussing whether it would be a good idea to require a fifth year of medical school to become a general practitioner (luckily, they were like, "Eh... nah"). It did not seem like it bothered them that this would make it even harder for civilians to get medical care.


I thought lawmakers made the decisions. Silly me! :-D


Theoretically yes. But I think at least part of the decision they've made is to delegate a chunk of the decisionmaking to doctors' guilds. Which—on the one hand, they are experts of a sort, but on the other hand, they have an obvious conflict of interest.

https://en.wikipedia.org/wiki/American_Medical_Association#R...

Wow. 1997: https://www.baltimoresun.com/1997/03/01/ama-seeks-limit-on-r...

> “The United States is on the verge of a serious oversupply of physicians,” the AMA and five other medical groups said in a joint statement. “The current rate of physician supply — the number of physicians entering the work force each year — is clearly excessive.”

> The groups, representing a large segment of the medical establishment, proposed limits on the number of doctors who become residents each year.

> The number of medical residents, now 25,000, should be much lower, the groups said. While they did not endorse a specific number, they suggested that 18,700 might be appropriate.


I've read about that before. I personally am of the belief that Medicare funding for residency slots should be eliminated over time. Also freely allow the opening and expansion of medical schools and teaching hospitals. Over time things should settle into a comfortable equilibrium of enough doctors making decent wages for everyone to be treated at a reasonable cost.

But maybe that's a free market fantasy. Who knows.

Or the alternative. Government-owned everything healthcare - facilities, hospitals, med schools, doctor practices. Doctors only work for the government.

The current system is neither here nor there and is designed for maximum profit.


> Because there are so many applicants that have good grades.

Sounds like we need more spots for these people to go


> As per The Information, Meta employees used a total of 60.2 trillion AI tokens (!!) in 30 days. If this was charged at Anthropic’s API prices, it would cost $900M.

How are the investors not completely losing their minds at this kind of spending?


Because they're doing the exact same thing.


Yet.

Many ISPs are pushing v4 users into CGNAT so they're easier and cheaper to manage.

This is a big reason why Netflix and YouTube are on v6. To avoid the cost of service over v4.


I'm not sure that counting "How it's going?" as a productivity stat is the win you think it is.


When they say 'stuck...' and we fix a problem, I'd count that as a win.


Fun story - at Oxford they like to name buildings after important people. Dr Hoare was nominated to have a house named after him. This presented the university with a dilemma of having a literal `Hoare house` (pronounced whore).

I can't remember what Oxford did to resolve this, but I think they settled on `C.A.R. Hoare Residence`.


There's the Tony Hoare Room [1] in the Robert Hooke Building. We held our Reinforcement Learning reading group there.

[1] https://www.cs.ox.ac.uk/people/jennifer.watson/tonyhoare.htm...


>our Reinforcement Learning reading group there //

Anyone else, like me, imagining ML models embodied as Androids attending what amounts to a book club? (I can't quite shake the image of them being little CodeBullets with CRT monitors for heads either.)


The CB reference is appreciated, he isn't talked about enough here


I had countless lectures and classes there


Our Graphics Lab at University used to be in an old house opposite a fish and chip shop. The people at the fish and chip shop were suspicious of our lab as all they saw was young men (mostly) entering and leaving at all hours of the night. We really missed an opportunity to name it "Hoare House" after one of our favourite computer scientists.


I was awarded the CAR Hoare prize from university, which is marginally better than the hoare prize I suppose


Cowards.


Shame the university takes itself so seriously. The illustrative example of overloading would have been pertinent to his subject of expertise.


I mean, I like puns but they're a flash in the pan. Jokes get old after a while and you don't want to embed them in something fairly permanent like a building name.


"Surely you've all heard of the Hoare house on campus?" seems like a pretty timeless way to a) keep people from dozing off during that bit of lecture b) cause a whole bunch of people to remember who this guy was and what he did.


This particular word for the oldest profession goes back to Old English. I am fairly sure it would outlive the building.


If the problem is when the joke lives on amusing undergrads long after you've tired of it, that just makes it worse.


Wait until they hear about what Magpie Lane in Oxford used to be called.

https://en.wikipedia.org/wiki/Magpie_Lane,_Oxford


A historical bawdy pun is one of the most Oxfordian things I can think of. If we can incorporate a man in drag, we're in real business.


"Hoare House" would trigger millions of idiots, from rude little children to pontifying alpha ideologues. In perpetuity.

The University was correct in saying "nope" to the endless distractions, misery, and overhead of having to deal with that.


Imagine being a world-famous computer scientist and dying and one of the top threads in a discussion of your life is juvenile crap about how your name sounds like "whore".


Imagine being an adult human but not being able to extract a tiny chuckle from such a silly thing.


Well, I do have a rather special last name which makes me susceptible.


[flagged]


GP is well known, you really needn't guess if you're that fascinated.


Chill out, I doubt he would've minded and humorous anecdotes are great ways to grieve


We're actually not that far off.

Right now, liquid fuels have about 10x the energy density of batteries, which rules batteries out for anything beyond extreme short-hop flights. But electric motors are about 3x more efficient than liquid-fuel engines. So now we're only 3x-4x away from a direct replacement.

That means we are not hugely far off. Boeing's next major plane won't run on batteries, but the one afterwards definitely will.
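
Back of the envelope, using the (admittedly rough) 10x and 3x figures above:

    fuel_density_ratio = 10    # liquid fuel ~10x battery energy density (claimed)
    motor_efficiency_gain = 3  # electric drive ~3x more efficient (claimed)
    effective_gap = fuel_density_ratio / motor_efficiency_gain
    print(f"effective gap: ~{effective_gap:.1f}x")  # ~3.3x, i.e. the "3x-4x"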


> So now we're only 3x-4x away from a direct replacement.

The math leaves out an important factor. As the liquid fuel burns, the airplane gets lighter. A lot lighter. Less weight => more range. More like 6x-8x.

Batteries don't get lighter when they discharge.


It's not that simple.

Battery-powered planes are inherently more aerodynamic, because they don't need to suck in oxygen for combustion, and because they need less cooling than an engine that heats itself up by constantly burning fuel. You can get incredible gains just by improving motor efficiency: the difference between a 98%-efficient motor and a 99%-efficient motor is that the latter requires half the cooling. That's more important than the ~1% increase in mileage.

Also, the batteries are static weight, which isn't as nightmarish as liquid fuel that wants to slosh around in the exact directions you want it not to. Static weight means that batteries can be potentially load-bearing structural parts (and in fact already are, in some EV cars).

The math leaves out a lot of important factors.


The fuel tanks are compartmentalized and have baffles to prevent sloshing. It's a solved problem.

Electric motors are not 98-99% efficient.

As you alluded to, battery weight is more than ICE weight. EVs are significantly heavier than ICEs.

I'm sure we can expect improvements along the lines you mentioned, but I seriously doubt it will be nearly enough.


Not to mention that jet planes routinely take off heavier than their max safe landing weight today too, relying on the weight reduction of consuming the fuel to return the plane to a safe landing weight again while enjoying the extra range afforded. This trick doesn't work well with batteries either.


There isn't any battery technology on the horizon that would lead to practical airliners.


You could do it with a ground effect plane for inland sea jaunts, like Seattle to Victoria. If you can float, then you don’t technically need a huge reserve like is normally needed.


> Boeing's next major plane won't run on batteries, but the one afterwards definitely will.

Jet engines work better. Boeing's next major plane will have jet engines, just like their previous major planes.

Synthetic, carbon neutral jet fuel will be the future for commercial jets.


Well, there's also running regular fuel through a fuel cell, an FCEV. That doubles the efficiency over ICE, so I guess that bumps it back up to 8x away?

Given the great energy densities and stability in transport of hydrocarbons, there's already some plants out there synthesising them directly from green sources, so that could be a solution if we don't manage to increase battery densities by another order of magnitude.


> there's already some plants out there synthesising them directly from green sources

I didn't realize that a "green" carbon atom is different from a regular carbon atom. They both result in CO2 when burned.


The problem isn't CO2 it's pulling carbon out of geological deposits. Thus the carbon atoms in synthetic fuel can be considered "green" provided an appropriate energy source was used.


I understand that, but it's a fallacious argument. It's still emitting the same amount of CO2 into the atmosphere.

You can also bury dead trees in a landfill.


You misunderstand the problem. The act of emitting CO2 into the atmosphere is not a problem.

Significantly increasing the CO2 concentration in the atmosphere is the problem. This happens when geological sources are used.

Unfortunately, burying dead trees in a landfill doesn't solve the problem because they decompose to methane which escapes. But you're right that geological CO2 production could be balanced by geologic CO2 sequestration, done properly.


The point is that emitting CO2 into the atmosphere was never the problem. Adding geological carbon back into the carbon cycle is the root cause of the entire thing.

You can certainly bury dead trees. I'm not sure how deep you'd need to go to accomplish long term (ie geological timeframe) capture. I somehow doubt the economics work out since what is all the carbon capture research even about given that we could just be dumping bamboo chips into landfills?


> I'm not sure how deep you'd need to go to accomplish long term (ie geological timeframe) capture.

Coal mines are sequestered trees.


Correct, but burying trees today isn't going to turn them into coal.

The big difference is that when the current coal layers were formed, bacteria to decompose trees hadn't evolved yet. There was a huge gap between trees forming and the ecosystem to break down trees forming, which led to a lot of trees dying and nothing being able to clean it up, which meant it was just left lying there until it was buried by soil and eventually turned into coal.

Try to bury a tree today, and nature will rapidly break it down. It won't form coal because there's nothing left to form coal.


But if the CO2 recently came from the atmosphere it's still a net zero impact though.

Like, take 5 units of carbon out of the atmosphere to create the fuel. Burn it and release 5 units of carbon to the atmosphere. What's the net increase again? (-5) + 5 = ?

FWIW I'm not saying these processes actually achieve this in reality. Just pointing out that it could be carbon neutral in the end.


> I didn't realize that a "green" carbon atom is different from a regular carbon atom.

Easy mistake to make, don't beat yourself up over it.

It's not the individual carbon atoms that carry the signature, it's the atoms in bulk that give the story ... eg: 6 x 10^23 carbon atoms

See: https://pmc.ncbi.nlm.nih.gov/articles/PMC7757245/


It's the time shift. Burning a plant releases CO2, and it is still considered to be carbon neutral.


Sorry, that's just verbal sleight of hand. There's no such thing as "green" CO2.


Yes there is. I used to fall for the same lie, but it's just not true. It's a question of system boundaries.

Green CO2 was recently (in geological terms) captured from the atmosphere into biomass, that's why its release is basically net zero.

Fossil CO2 hasn't been part of the atmosphere in eons (back in e.g. the Cretaceous, the CO2 ratio was many times higher), so its release is additive.


So bury the trees. Or run them through the sawmill and build houses.


How do you justify exhaling then?


Well, you don't. Everything you do has a carbon footprint, you know, haven't you heard? Everything.


Have you always had difficulty with abstraction?


And, the two major byproducts of burning hydrocarbons are water and carbon dioxide.

Literally essential plant nutrients, essential for life.

Tangentially related, the 2022 Hunga Tonga–Hunga Haʻapai volcanic eruption ejected so much water vapour into the upper atmosphere that it was estimated to have ongoing climate forcing effects for up to 10 years.

Water vapour is a stronger greenhouse gas than carbon dioxide.

And we heard precisely nothing about that in the media other than some science specific sources at the time and nothing on an ongoing basis.

From Wikipedia:

The underwater explosion also sent 146 million tons of water from the South Pacific Ocean into the stratosphere. The amount of water vapor ejected was 10 percent of the stratosphere's typical stock. It was enough to temporarily warm the surface of Earth. It is estimated that an excess of water vapour should remain for 5–10 years.

https://en.wikipedia.org/wiki/2022_Hunga_Tonga%E2%80%93Hunga...


Please, the media didn't report on this because natural disasters affecting the climate is not controllable by humans and thus doesn't warrant a global effort to address unless it's so large as to be species ending.

Global warming is not fake, there's tons and tons of evidence it is real and the weather is getting more and more extreme as humans continue to burn petrol.


Also some time after that other guy copied and pasted his canned Hunga remark into his big spreadsheet of climate denial comments the international community of climate scientists concluded that Hunga cooled the atmosphere, on balance.

"As a consequence of the negative TOA RF, the Hunga eruption is estimated to have decreased global surface air temperature by about 0.05 K during 2022-2023; due to larger interannual variability, this temperature change cannot be observed."

https://juser.fz-juelich.de/record/1049154/files/Hunga_APARC...


Thanks for linking that document, I’ll have a read.


Yes, and it doesn’t fit the narrative.

We should be moving towards being able to terraform Earth not because of anthropogenic climate forcing, but because one volcano or one space rock could render our atmosphere rather uncomfortable overnight.

You won’t find the Swedish Doom Goblin saying anything about that.

> burn petrol.

Well yeah, so making electricity unreliable and expensive, and the end-user’s problem (residential roof-top solar) is somehow supposed help?

Let’s ship all our raw minerals and move all our manufacturing overseas to countries that care less about environmental impacts and have dirtier electricity, then ship the final products back, all using the dirtiest bunker fuel there is.

How is that supposed to help?

I mean, I used to work for The Wilderness Society in South Australia, now I live in Tasmania and am a card carrying One Nation member.

Because I’m not a complete fucking idiot.

Wait till you learn about the nepotism going on with the proposed Bell Bay Windfarm and Cimitiere Plains Solar projects.

I’m all for sensible energy project development, but there’s only so much corruption I’m willing to sit back and watch.

With the amount of gas, coal, and uranium Australia has, it should be a manufacturing powerhouse, and host a huge itinerant worker population with pathways to residency / citizenship, drawn from the handful of countries that built this country. And citizens could receive a monthly stipend as their share of the enormous wealth the country should be generating.

Japan resells our LNG at a profit. Our government is an embarrassment.


Natural resources are not required to make a country an economic powerhouse. See Japan, for example. Hong Kong, Taiwan, S Korea.

What's needed are free markets. Any country that wants to become a powerhouse has it within their grasp. Free markets.


And political will.

The Antipodes have such a problem with successful people we even invented a term for it.

https://en.wikipedia.org/wiki/Tall_poppy_syndrome

On the subject of free markets, Australia excels. We even let foreign entities extract and sell our LNG and pay no royalties and no tax.

https://australiainstitute.org.au/post/zero-royalties-charge...

Doesn’t get any freer than that!


Spain stripped S. America of its gold and silver, and neither Spain nor S. America benefited from it.


Doesn’t South America collectively produce more gold in one year than the Spanish usurped from them in their entire conquest period?

Gold production by country:

https://en.wikipedia.org/wiki/Lists_of_countries_by_mineral_...

In only the first half-century or so of the Spanish conquest of the Americas, over 100 tons of gold were extracted from the continent. - https://www.worldhistory.org/article/2045/the-gold-of-the-co...

Context is for kings though. In the context of what occurred when it occurred, you’re right.

For a while there, Australia was known as ‘the lucky country’ because despite the folly of politicians, and general fallibility of humans, we had wealth for toil.

Now we just give it away.


Hmmm. If we do simple extrapolation based on a battery density improvement rate of 5% a year, it takes about 30 years to get there. So it's not as crazy as it sounds - and it's also worth noting that there are incremental improvements in aerodynamics and materials so that gets you there faster...

However, as others have pointed out, the battery-powered plane doesn't get lighter in flight the way a fuel-burning plane does.


If we do simple extrapolation, a cellphone-sized battery will reach the 80kWh needed to power a car in as little as 180 years.

Expecting a 5% / year growth rate sustained for 30 years is very optimistic. It is far more likely that we'll hit some kind of diminishing return well before that.
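
For what it's worth, here is the compound-growth math behind both estimates; the 5%/yr rate and the ~15 Wh phone-battery figure are assumptions from this thread rather than data:

    import math

    def years_to_multiply(factor: float, annual_growth: float = 0.05) -> float:
        """Years of compound improvement needed to grow capacity by `factor`."""
        return math.log(factor) / math.log(1.0 + annual_growth)

    print(years_to_multiply(4))           # ~28 years to close a 4x gap
    print(years_to_multiply(80 / 0.015))  # ~176 years: phone-sized cell -> 80 kWh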


More accurately, the calculation needs to factor in the fact that battery weight doesn’t decrease as charge is used.

Commercial aviation’s profitability hinges on being able to carry only as much fuel as strictly[1] required.

How can batteries compete with that constraint?

Also, commercial aviation aircraft aren’t time-restricted by refuelling requirements. How are batteries going to compete with that? Realistically, a busy airport would need something like a closely located gigawatt scale power plant with multi-gigawatt peaking capacity to recharge multiple 737 / A320 type aircraft simultaneously.

I don’t believe energy density parity with jet fuel is sufficient. My back of the neocortex estimate is that battery energy density would need to be 10x jet fuel to be of much practical use for narrow-body-and-up airliners.


You laid it out better than I. Thank you!


Thanks Walter!


An A320 can store 24k liters of fuel. Jet fuel stores 35 MJ/L. So, the plane carries 8.4E11 J of energy. If that were stored in a battery that had to be charged in an hour, 0.23 GW of electric power would be required.

So indeed, an airport serving dozens or hundreds of electric aircraft a day will need obscene amounts of electric energy.
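
Checking that arithmetic with the figures given above:

    fuel_liters = 24_000    # A320 fuel capacity, per the comment above
    mj_per_liter = 35       # jet fuel energy density
    energy_j = fuel_liters * mj_per_liter * 1e6  # ~8.4e11 J
    charge_seconds = 3600   # assume a one-hour charge window
    print(f"{energy_j / charge_seconds / 1e9:.2f} GW")  # ~0.23 GW per aircraft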


Jet engines are not 100% efficient.

Electric motors can be pretty close, 98% is realistic. Of course other parts of the system will lose energy, like conversion losses.

Of course that doesn't mean batteries are currently a viable replacement. One should still take efficiency into account in quick back of the envelope calculations.


Halve it to 0.11 GW then.

It makes no difference, we’d still need gigawatt scale electricity production, with some multiple of that at peak, just for a fairly unremarkable airport.


It's not that outrageous. Apparently, 90% of India is living on less than $10 per day (https://ourworldindata.org/grapher/share-living-with-less-th...)


I suspect most of these people are not software engineers with a computer?

