> I administrate are contractually obligated to be so isolated
Yeah, I've seen those contracts. They just reference a SeCuRiTy doc that's 20+ years old, and has never been re-evaluated. Things are secure because they follow the doc, not because they have actually evaluated the reasonable attack space.
I've been fighting customers for years over their ideas of proper TLS usage, and it's always the same thing. They've got a security doc that never changes and has never evaluated any of the trade-offs. It's almost to the point that the people who wrote them choose things that increase downtime and KTLO work without helping security.
Ah-yup. The equivalent in my world is contracts that insist we make our employees rotate their passwords every 2 months or whatever, which was a popular (but still dumb) idea 20 years ago and is strongly recommended against today.
On week one of my current job, I turned that off for the whole company. Here's the citation you can give your security department to show them why they're doing it wrong.
NIST Special Publication 800-63B, the July 2025 version, section 3.1.1.2, says:
"Verifiers and CSPs SHALL NOT require subscribers to change passwords periodically. However, verifiers SHALL force a change if there is evidence that the authenticator has been compromised."
The previous version from June 2017, section 5.1.1.2, says:
"Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator."
So 9 years ago, NIST said to stop requiring that. Last year, they clarified that to say, no, really, freaking stop it. Any company still making people do that today is 9 years out of date, and 1 year out of compliance.
You are right that a 32-bit ipv4 stack cannot understand a 64-bit packet format. The thing I am trying to get at is not native compatibility; it is operational compatibility via translation. I know, I know, you will probably say that is what ipv6 bridges do.
But in an ipv42-type setup, you would have a deterministic embedding so that every ipv4 address is represented inside the larger address space. This would allow translation at network boundaries and let old systems continue to operate unchanged. Then the routers and systems would be upgraded incrementally. I think that is why it would have been adopted more quickly.
I remember reading about that a long time ago. I wonder why it never really caught on?
I think part of the problem is not so much a technical one as a coordination issue. Who are you more likely to get on board? ISPs and backbone providers. What is the path forward? Here is the recommended path forward, kind of thing.
I don't see how it matters that we forced people onto ipv6 as well. Who cares. It's more about the difference in mental models that prevented adoption, especially among those who run the services that are on the internet.
I went and re-read point 3B. I agree that some hypothetical ipv42 faces a translation problem.
But it does not follow that address design is irrelevant. The structure of the address space directly determines whether translation can be stateless and algorithmic.
In a hypothetical ipv42 design that preserves a deterministic embedding relationship between old and new addresses, translation at the edges could be largely stateless and mechanically reversible, reducing coordination overhead between operators and making reachability more predictable.
In our world, with ipv6, the transition seems to require a mix of dual stack, nat64, dns64, and tunneling approaches. The mapping between ipv4 and ipv6 is not uniformly deterministic across all deployment contexts.
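A minimal sketch of what such a deterministic, reversible embedding could look like (the 0x42 prefix, the names, and the layout here are purely illustrative, not any real spec):

```python
import ipaddress

# Hypothetical "ipv42"-style embedding: every ipv4 address maps
# deterministically into one fixed prefix of the larger space, so
# edge translators need no per-flow state and the mapping is
# mechanically reversible. The 0x42 prefix is purely illustrative.
EMBED_PREFIX = 0x42 << 120

def embed(v4: str) -> int:
    """Map an ipv4 address into the larger address space."""
    return EMBED_PREFIX | int(ipaddress.IPv4Address(v4))

def extract(addr: int) -> str:
    """Recover the original ipv4 address; no lookup table needed."""
    assert addr >> 120 == 0x42, "not an embedded v4 address"
    return str(ipaddress.IPv4Address(addr & 0xFFFF_FFFF))

assert extract(embed("1.1.1.1")) == "1.1.1.1"  # stateless round-trip
```

The point is that translation is a pure function of the address bits, so any two translators agree without coordinating.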
Also, there is just a human factor: the mental gymnastics that go on, the perception of what the way forward is. With ipv6, it feels like everyone has to go get their ipv6 stack in order. With a hypothetical ipv42, where the ISPs and backbone providers can throw in the translation layers, it feels to me like they would have gotten on board much more quickly. Yeah, I know, it is just a feeling.
I agree with you about the embedded addresses, and I don't understand why the space was moved from all zeros to a bunch of other mappings.
but the utility of this isn't that high. we already know how to handle 4-4 and 6-6 traffic just fine. but if a 4 host wants to talk to a 6 host, it just doesn't have the extra bits to describe it, so this doesn't facilitate 4-6 endpoint communication at all. this is true even if you substitute v6 with any other layer 3 with a larger address space.
where it does help is in a unified routing backbone, that would allow v4 prefixes to be announced in the v6 routing system. which is arguably useful.
The embedding I believe you are referring to is not part of the global routing model. (Maybe I am wrong?) What I am describing is making that kind of declaration central to the system: a deterministic, network-wide mapping of ipv4 into the larger ipv6 space. The translation in ipv6 ended up being handled by a mix of mechanisms after the fact, rather than a single, uniform mapping model tied directly to the address structure. I think part of the problem is they did not put that front and center, at the beginning, when doing the initial specification.
Indeed, doing it this way would keep the fragmentation, or at least delay fixing it. That's what these articles always overlook: the goal of ipv6 wasn't just to add more bits, it was also to defrag the routes.
I think instead of 1.1.1.1::, you could do 4:1.1.1.1::, wait for v4 to be gone, then start building new topologies in the other /8s. Not sure how hard that is, but it seems easier than what they're trying to do now.
Would it help at all? You can't just send IPv6 packets down the equivalent IPv4 path, because that next-hop router probably doesn't understand IPv6 packets. In fact there could be no IPv6 path at all between you and the destination, so knowing where they are still wouldn't help you forward packets. If it understood them, it would have given you an IPv6 route anyway. Updating BGP to support IPv6 routes wasn't an actual problem.
There are lots of services I can't send v6 to, not because some router in the middle only understands v4 but because the service operator decided not to deal with v6.
So the idea is to surreptitiously install software on the service operator's machines that they can't disable?
It's already a bit like that, but they can and do disable it. You can see the other comments in this thread: many people disable IPv6 upon any sign of a networking problem.
No, the idea is you can turn v6 on/off, but doing so only changes the packet format and nothing else at first. There's no separate place to configure v6-specific settings because there are none. You use the same address, routes, DHCP, NAT, DNS, etc as v4, but you're limited to 32-bit addrs at first. The point is to just get people off v4.
Once v6 has reached enough adoption, you can turn off v4. Those who want to keep the addrs from v4 can, except now they get way more addresses under those too. Others can start building a clean new topology under the other prefixes without worrying about compatibility.
I don't see why anyone would change all the bits you actually need to change for some nebulous future gains. You still have to deal with new sockets and new routing decisions at least, without really gaining much from new features.
To me it looks like something that would have gained nearly no actual adoption outside some toy examples. Later you will need to get new DNS, DHCP (or an alternative), and so on anyway.
That's a legit concern. If that's not interesting enough to the kind of user that wants all-new v6, instead start from today where some users are on the new v6 network, and say they added the 4:: prefix as a way to pick up the kind of user that doesn't want to change much. They'd still be compatible eventually. Though the reason I was thinking 4:: from the start would've been attractive enough is, a lot of people did use 6to4 and other halfway measures despite having no immediate gain.
Today's DNS6, DHCP6, etc. are totally incompatible with v4. 4:: buys backwards compatibility. Each can be updated to support longer addrs without caring whether you use it with v4 or v6.
> At least at first, you wouldn't, you'd embed all of them. Cloudflare has 1.1.1.1, so they get 1.1.1.1:: too.
Everyone with an IPv4 address automatically got an IPv6 allocation:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
What does it mean to have a /48? Well, an IPv6 subnet is a /64, so that's 16 bits for subnets. In IPv4 land, if you take a subnet to be a /24, an allocation with 16 bits' worth of subnets would be a /8.
So basically, with 6to4, every person with an IPv4 address got the equivalent of a Class A in IPv6.
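The 6to4 construction quoted above is mechanical enough to show in a few lines of Python (using only the stdlib `ipaddress` module):

```python
import ipaddress

def sixtofour_prefix(v4: str) -> ipaddress.IPv6Network:
    """Build the 6to4 /48 prefix for a global IPv4 address (RFC 3056):
    2002::/16 with the 32-bit IPv4 address filling bits 16..47,
    leaving 16 bits for subnets and 64 for hosts."""
    addr = (0x2002 << 112) | (int(ipaddress.IPv4Address(v4)) << 80)
    return ipaddress.IPv6Network((addr, 48))

print(sixtofour_prefix("192.0.2.4"))  # 2002:c000:204::/48
```

Note the mapping is a pure bit operation, so it can be computed (and reversed) anywhere with no registry lookup.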
This is a fake argument. No one is arguing for backwards compatibility.
But there was also no necessity to demand reshaping networks and changing address assignment in a way that made migration extremely work intensive and hard to deploy in parallel.
I wouldn't have tried to reinvent DHCP, would have kept NAT, and generally would have attempted to keep the overall shape of a v6 network the same as v4 networks to ease the transition of large deployments.
IPv6 now has most of that - after years of resistance - which results in a mixed mess of "several ways to do it" approaches, spiced with clients and equipment supporting a random set of them.
Yes, but CGNAT is an inherently stateful system and as a result will always be more expensive to operate per packet than a stateless router. The reason we are seeing steady (if slow) growth in native IPv6 is because the workarounds for IPv4 exhaustion cost money, and eventually upgrading equipment and putting pressure on website operators to support IPv6 becomes cheaper than growing CGNAT capacity.
Because there are so many applicants that have good grades.
A more cynical view is that the governing boards want a way to pick and choose who they let in. So they create "holistic" application systems to get a "360 degree view of the candidate".
No matter how many have good grades, you can always pick the top n by grades—unless there's a ceiling that the top m > n have all hit. Which, if you're talking about "grades" as in GPA, is plausible.
MCAT seems more relevant, though. According to Claude: "Roughly 0.1% or fewer of test-takers score a perfect 528 in any given year — typically only a few dozen individuals out of the ~120,000 or so who sit for the exam annually." So it should work fairly well for them to sort by MCAT and take however many they have (or expect to have) room for.
I think OP's point was that the governing boards don't want the people with the top n grades. They want certain people, and by making the admissions criteria fuzzy, they can pick and choose those certain people and then say "well, our admission criteria is subjective," and "we are looking for 'well-rounded people,'" and all kinds of other vague, weaselly ways to let them legitimately shape the student body in the way they want.
One of my roommates who was premed had a "hot car" poster as a motivational study aid. After a short term as a candy striper at a local hospital, he changed majors. The system works! ;-)
At a certain point, grades become arbitrary and won’t necessarily select for the best candidates. Obviously the current system doesn’t, either.
The actual solution is to increase the number of slots for training doctors to match the huge number of qualified applicants. It makes even more sense given that there is a shortage of doctors and health care costs are astronomical.
I want a doctor who was a strong student with diverse experiences, lots of soft skills and can handle the entire psychological spectrum of being a doctor, not the doctor who was solely the best at exams.
There are all kinds of doctors though? The ones who don't have soft skills or diverse experiences can go into pathology or other fields that don't involve as much patient interaction. Why lose out on their gifts altogether if they're genuinely interested in medicine.
> No matter how many have good grades, you can always pick the top n by grades. Which, if you're talking about "grades" as in GPA, is plausible.
I live in Ontario and we're there. 40% of Waterloo students had above a 95% average in high school. The average GPA to get into UofT med school is 3.94/4.00 GPA.
What has happened as a result is students killing themselves and each other. If you fail one test in any course, you cannot move to the next level.
So, if you go on the UofT subreddit there's endless stories of pre-med students sabotaging each other. Faking friendliness, destroying notes, etc etc. This is arguably rational because the pool is small and there's little to gain by studying harder if you already have a perfect GPA.
You don't want this type of person as a doctor. They will sabotage others because that is how they got ahead in the past. In a medical environment that kills people.
> This is arguably rational because the pool is small and there's little to gain by studying harder if you already have a perfect GPA.
So there is a low ceiling, and if they instead used MCAT or something with a higher ceiling (where, apparently, the number of perfect scores is about 50 per year—in America, presumably lower in Canada due to population size), then studying harder would benefit them. That seems like a much better outlet for competitive urges.
But also, how small is the pool of qualified applicants? If there were something like "they're going to take n people from your school, at which there are 30 plausible candidates", then sabotaging one might conceivably be worthwhile. But if the pool is—well, Google says 3,000 medical students get accepted each year in Canada (and the qualified applicant pool is presumably at least somewhat larger), and sabotaging one person is extremely unlikely to help you personally. (This is one case where it's good that the expected-value "benefits", of sabotaging person X, are widely distributed among thousands of medical candidates, and thus it's a "free-rider problem" where no individual candidate has a strong motivation to do the work.)
Is there some multi-stage thing where they pick 10 people from each high school, or 30 from a town, or something? Or is there major grading on a curve, or a big benefit for being the top person in your classroom of 15? That seems like how you would get real incentives for this backstabbing behavior. Otherwise, I can't see how it's rational (even to a complete sociopath), and would have to chalk it up to individual miscreants and possibly some kind of culture that encourages it in other ways.
That would increase competition and thus depress wages for existing doctors, who are the ones who make the decisions here. I heard, from a medical school attendee, that she overheard some doctors discussing whether it would be a good idea to require a fifth year of medical school to become a general practitioner (luckily, they were like, "Eh... nah"). It did not seem like it bothered them that this would make it even harder for civilians to get medical care.
Theoretically yes. But I think at least part of the decision they've made is to delegate a chunk of the decisionmaking to doctors' guilds. Which—on the one hand, they are experts of a sort, but on the other hand, they have an obvious conflict of interest.
> “The United States is on the verge of a serious oversupply of physicians,” the AMA and five other medical groups said in a joint statement. “The current rate of physician supply — the number of physicians entering the work force each year — is clearly excessive.”
> The groups, representing a large segment of the medical establishment, proposed limits on the number of doctors who become residents each year.
> The number of medical residents, now 25,000, should be much lower, the groups said. While they did not endorse a specific number, they suggested that 18,700 might be appropriate.
I've read about that before. I personally am of the belief that Medicare funding for residency slots should be eliminated over time. Also freely allow the opening and expansion of medical schools and teaching hospitals. Over time things should settle into a comfortable equilibrium of enough doctors making decent wages for everyone to be treated at a reasonable cost.
But maybe that's a free market fantasy. Who knows.
Or the alternative. Government-owned everything healthcare - facilities, hospitals, med schools, doctor practices. Doctors only work for the government.
The current system is neither here nor there and is designed for maximum profit.
> As per The Information, Meta employees used a total of 60.2 trillion AI tokens (!!) in 30 days. If this was charged at Anthropic’s API prices, it would cost $900M.
How are the investors not completely losing their minds at this kind of spending?
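For what it's worth, the $900M figure checks out against a rough blended rate (the $15 per million tokens used here is an assumption for the sanity check; actual Anthropic pricing varies by model and input/output split):

```python
tokens = 60.2e12          # from the quoted figure
price_per_million = 15.0  # assumed blended $/M tokens
cost = tokens / 1e6 * price_per_million
print(f"${cost / 1e6:.0f}M")  # $903M, matching the ~$900M claim
```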
Fun story - at Oxford they like to name buildings after important people. Dr Hoare was nominated to have a house named after him. This presented the university with a dilemma of having a literal `Hoare house` (pronounced whore).
I can't remember what Oxford did to resolve this, but I think they settled on `C.A.R. Hoare Residence`.
> our Reinforcement Learning reading group there //
Anyone else, like me, imagining ML models embodied as Androids attending what amounts to a book club? (I can't quite shake the image of them being little CodeBullets with CRT monitors for heads either.)
Our Graphics Lab at University used to be in an old house opposite a fish and chip shop. The people at the fish and chip shop were suspicious of our lab as all they saw was young men (mostly) entering and leaving at all hours of the night. We really missed an opportunity to name it "Hoare House" after one of our favourite computer scientists.
I mean, I like puns but they're a flash in the pan. Jokes get old after a while and you don't want to embed them in something fairly permanent like a building name.
"Surely you've all heard of the Hoare house on campus?" seems like a pretty timeless way to a) keep people from dozing off during that bit of lecture b) cause a whole bunch of people to remember who this guy was and what he did.
Imagine being a world-famous computer scientist and dying and one of the top threads in a discussion of your life is juvenile crap about how your name sounds like "whore".
Right now, liquid fuels have about 10x the energy density of batteries, which absolutely kills batteries for anything beyond extremely short-hop flights. But electric motors are about 3x more efficient than liquid-fuel engines. So now we're only 3x-4x away from a direct replacement.
That means we are not hugely far off. Boeing's next major plane won't run on batteries, but the one afterwards definitely will.
> So now we're only 3x-4x away from a direct replacement.
The math leaves out an important factor. As the liquid fuel burns, the airplane gets lighter. A lot lighter. Less weight => more range. More like 6x-8x.
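Putting the thread's rough numbers together (all figures are the upthread estimates, not measurements):

```python
density_ratio = 10   # liquid fuel vs battery energy density
motor_advantage = 3  # electric drivetrain vs turbine efficiency
gap = density_ratio / motor_advantage
print(f"effective gap: {gap:.1f}x")  # 3.3x

# Fuel burn-off: the average fuel mass carried over a flight is
# roughly half the takeoff load, which roughly doubles the gap again.
print(f"with burn-off: ~{gap * 2:.0f}x")  # ~7x, inside the 6x-8x range
```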
Batteries are inherently more aerodynamic, because they don't need to suck in oxygen for combustion, and because they need less cooling than an engine that heats itself up by constantly burning fuel. You can get incredible gains just by improving motor efficiency - the difference between a 98%-efficient motor and a 99%-efficient motor is that the latter requires half the cooling. That's more important than the ~1% increase in mileage.
Also, the batteries are static weight, which isn't as nightmarish as liquid fuel that wants to slosh around in the exact directions you want it not to. Static weight means that batteries can be potentially load-bearing structural parts (and in fact already are, in some EV cars).
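The "half the cooling" remark above is just a statement about losses; with an assumed 1 MW of electrical input for scale:

```python
motor_input_kw = 1000  # assumed 1 MW electrical input, for scale
for eff in (0.98, 0.99):
    heat_kw = motor_input_kw * (1 - eff)
    print(f"{eff:.0%} efficient -> {heat_kw:.0f} kW of waste heat")
# 98% -> 20 kW, 99% -> 10 kW: halving the losses halves the heat to shed
```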
Not to mention that jet planes routinely take off heavier than their max safe landing weight today too, relying on the weight reduction of consuming the fuel to return the plane to a safe landing weight again while enjoying the extra range afforded. This trick doesn't work well with batteries either.
You could do it with a ground effect plane for inland sea jaunts, like Seattle to Victoria. If you can float, then you don’t technically need the huge reserve that is normally required.
Well, there's also burning regular fuel in a fuel cell, i.e., an FCEV. That doubles the efficiency over ICE, so I guess that bumps it back up to 8x away?
Given the great energy densities and stability in transport of hydrocarbons, there's already some plants out there synthesising them directly from green sources, so that could be a solution if we don't manage to increase battery densities by another order of magnitude.
The problem isn't CO2; it's pulling carbon out of geological deposits. Thus the carbon atoms in synthetic fuel can be considered "green" provided an appropriate energy source was used.
You misunderstand the problem. The act of emitting CO2 into the atmosphere is not a problem.
Significantly increasing the CO2 concentration in the atmosphere is the problem. This happens when geological sources are used.
Unfortunately, burying dead trees in a landfill doesn't solve the problem because they decompose to methane which escapes. But you're right that geological CO2 production could be balanced by geologic CO2 sequestration, done properly.
The point is that emitting CO2 into the atmosphere was never the problem. Adding geological carbon back into the carbon cycle is the root cause of the entire thing.
You can certainly bury dead trees. I'm not sure how deep you'd need to go to accomplish long-term (i.e., geological-timeframe) capture. I somehow doubt the economics work out, though; otherwise, what is all the carbon capture research even about, given that we could just be dumping bamboo chips into landfills?
Correct, but burying trees today isn't going to turn them into coal.
The big difference is that when the current coal layers were formed, bacteria to decompose trees hadn't evolved yet. There was a huge gap between trees forming and the ecosystem to break down trees forming, which led to a lot of trees dying and nothing being able to clean it up, which meant it was just left lying there until it was buried by soil and eventually turned into coal.
Try to bury a tree today, and nature will rapidly break it down. It won't form coal because there's nothing left to form coal.
But if the CO2 recently came from the atmosphere it's still a net zero impact though.
Like, take 5 units of carbon out of the atmosphere to create the fuel. Burn it and release 5 units of carbon to the atmosphere. What's the net increase again? (-5) + 5 = ?
FWIW I'm not saying these processes actually achieve this in reality. Just pointing out that it could be carbon neutral in the end.
And, the two major byproducts of burning hydrocarbons are water and carbon dioxide.
Literally essential plant nutrients, essential for life.
Tangentially related: the 2022 Hunga Tonga–Hunga Haʻapai volcanic eruption ejected so much water vapour into the upper atmosphere that it was estimated to have ongoing climate-forcing effects for up to 10 years.
Water vapour is a stronger greenhouse gas than carbon dioxide.
And we heard precisely nothing about that in the media, other than from some science-specific sources at the time, and nothing on an ongoing basis.
From Wikipedia:
> The underwater explosion also sent 146 million tons of water from the South Pacific Ocean into the stratosphere. The amount of water vapor ejected was 10 percent of the stratosphere's typical stock. It was enough to temporarily warm the surface of Earth. It is estimated that the excess water vapour should remain for 5–10 years.
Please, the media didn't report on this because natural disasters affecting the climate are not controllable by humans and thus don't warrant a global effort to address unless they're so large as to be species-ending.
Global warming is not fake, there's tons and tons of evidence it is real and the weather is getting more and more extreme as humans continue to burn petrol.
Also, some time after that other guy copied and pasted his canned Hunga remark into his big spreadsheet of climate denial comments, the international community of climate scientists concluded that Hunga cooled the atmosphere, on balance.
"As a consequence of the negative TOA RF, the Hunga eruption is estimated to have decreased global surface air temperature by about 0.05 K during 2022-2023; due to larger interannual variability, this temperature change cannot be observed."
We should be moving towards being able to terraform Earth not because of anthropogenic climate forcing, but because one volcano or one space rock could render our atmosphere rather uncomfortable overnight.
You won’t find the Swedish Doom Goblin saying anything about that.
> burn petrol.
Well yeah, so making electricity unreliable and expensive, and the end-user’s problem (residential roof-top solar), is somehow supposed to help?
Let’s ship all our raw minerals and move all our manufacturing overseas to countries that care less about environmental impacts and have dirtier electricity, then ship the final products back, all using the dirtiest bunker fuel there is.
How is that supposed to help?
I mean, I used to work for The Wilderness Society in South Australia, now I live in Tasmania and am a card carrying One Nation member.
Because I’m not a complete fucking idiot.
Wait till you learn about the nepotism going on with the proposed Bell Bay Windfarm and Cimitiere Plains Solar projects.
I’m all for sensible energy project development, but there’s only so much corruption I’m willing to sit back and watch.
With the amount of gas, coal, and uranium Australia has, it should be a manufacturing powerhouse, and host a huge itinerant worker population with pathways to residency / citizenship, drawn from the handful of countries that built this country. And citizens could receive a monthly stipend as their share of the enormous wealth the country should be generating.
Japan resells our LNG at a profit. Our government is an embarrassment.
Context is for kings though. In the context of what occurred when it occurred, you’re right.
For a while there, Australia was known as ‘the lucky country’ because despite the folly of politicians, and general fallibility of humans, we had wealth for toil.
Hmmm. If we do simple extrapolation based on a battery density improvement rate of 5% a year, it takes about 30 years to get there. So it's not as crazy as it sounds - and it's also worth noting that there are incremental improvements in aerodynamics and materials so that gets you there faster...
However, as others have pointed out, the battery-powered plane doesn't get lighter in flight the way a fuel-burning one does.
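For what it's worth, the compound-growth arithmetic behind the "about 30 years" figure (the gap and rate here are the thread's assumptions):

```python
import math

gap = 4.0    # effective density shortfall from upthread (3x-4x, high end)
rate = 0.05  # assumed yearly improvement in battery density
years = math.log(gap) / math.log(1 + rate)
print(f"{years:.0f} years")  # 28 years, roughly the claimed "about 30"
```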
If we do simple extrapolation, a cellphone-sized battery will reach the 80kWh needed to power a car in as little as 180 years.
Expecting a 5% / year growth rate sustained for 30 years is very optimistic. It is far more likely that we'll hit some kind of diminishing return well before that.
More accurately, the calculation needs to factor in the fact that battery weight doesn’t decrease as charge is used.
Commercial aviation’s profitability hinges on being able to carry only as much fuel as strictly[1] required.
How can batteries compete with that constraint?
Also, commercial aviation aircraft aren’t time-restricted by refuelling requirements. How are batteries going to compete with that? Realistically, a busy airport would need something like a closely located gigawatt scale power plant with multi-gigawatt peaking capacity to recharge multiple 737 / A320 type aircraft simultaneously.
I don’t believe energy density parity with jet fuel is sufficient. My back-of-the-neocortex estimate is that battery energy density would need to be 10x that of jet fuel to be of much practical use for narrow-body-and-up airliners.
An A320 can store 24k liters of fuel. Jet fuel stores 35 MJ/L. So, the plane carries 8.4E11 J of energy. If that was stored in a battery that had to be charged in an hour 0.23GW of electric power would be required.
So indeed, an airport serving dozens or hundreds of electric aircraft a day will need obscene amounts of electric energy.
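The arithmetic above in runnable form (same approximate figures):

```python
fuel_litres = 24_000  # approximate A320 fuel capacity
mj_per_litre = 35     # jet fuel energy density
energy_j = fuel_litres * mj_per_litre * 1e6
print(f"{energy_j:.1e} J")  # 8.4e+11 J

charge_seconds = 3600  # assume a one-hour turnaround
power_gw = energy_j / charge_seconds / 1e9
print(f"{power_gw:.2f} GW per aircraft")  # 0.23 GW
```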
Electric motors can be pretty close; 98% is realistic. Of course other parts of the system will lose energy, like conversion losses.
Of course that doesn't mean batteries are currently a viable replacement. One should still take efficiency into account in quick back of the envelope calculations.
It makes no difference, we’d still need gigawatt scale electricity production, with some multiple of that at peak, just for a fairly unremarkable airport.
If it's going down at 1 day per week then it's not so bad. If it's closer to 0.75 days per day, that's much more serious.