Which is apparently manageable. Scott Manley isn’t an industry veteran, but he does know a lot about space engineering and science. Here’s his breakdown of the feasibility, and heat management is not really a major issue:
These satellites will be in orbits where they are always illuminated. That means constant temperatures, which means no thermal cycling and no reliability concerns.
When people say 'running it hot is bad for reliability', they mean 'running it hot and then bringing it back to room temp from time to time will eventually kill it'.
It's in space, which requires liquid cooling to move heat to the radiators. No rocket is big enough, so it has to be assembled on orbit. And no terrestrial liquid cooling system is 100% leak-free.
The existence of starlink proves that this is false. Look at most current pitches, they don’t talk about GW-class monsters anymore. There’s absolutely nothing stopping a 20-30kW satellite bus the size of starlink (or I guess up to 100kW? once starship is available - it’s all about payload fairing diameter) from hosting ~1 rack of compute and antennas. The economics may or may not make sense, we’ll have to see.
There’s very little research work needed to make this happen; it’s all about engineering some satellite buses and having them fly in close formation to form a “data center”. This group of satellites in sun-synchronous orbit would relay to a comms constellation (e.g. starlink itself) and operate as a global-scale data center. The heat management and orbital mechanics are all straightforward, really.
I've heard this before. A datacenter and a starlink satellite are not in the same ballpark of power usage and heat dissipation needs. They are orders of magnitude off from each other.
The point is that you don’t need to put a whole datacenter into a single satellite. You can put a single rack per satellite and have different racks communicate via antennas, laser links, or perhaps even wires since they’ll be launched in groups of 10-50 anyway. You could also dock them to each other, but that’s not necessarily needed.
I don't understand what makes these "datacenters" if they're distributed across satellites with WAN-esque interconnect.
Are we overloading the term "datacenter"? Or is it not overloaded but somehow able to achieve datacenter-like speeds / (tail) latency even when distributed across satellites?
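To put rough numbers on the inter-satellite question: in vacuum, the one-way light delay between formation-flying satellites a few kilometres apart is in the same ballpark as a network hop inside a terrestrial datacenter. A quick sketch (the 1-10 km spacing is my own illustrative assumption, not from any published design):

```python
# One-way speed-of-light delay between satellites flying in formation.
# Spacing of 1-10 km is an illustrative assumption.
C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_latency_us(distance_m: float) -> float:
    """Microseconds for light to cross `distance_m` in vacuum."""
    return distance_m / C * 1e6

for d in (1_000, 10_000):
    print(f"{d / 1000:.0f} km -> {one_way_latency_us(d):.1f} us one-way")
# 1 km is ~3.3 us one-way; 10 km is ~33 us.
```

So raw distance isn't obviously the blocker; the harder parts would be the bandwidth and pointing/tracking of the inter-satellite links, which is where the "datacenter-like" claim gets strained.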
It's worth noting that GPUs have a much higher failure rate than traditional CPUs, over 10x, largely due to thermal stress; the amount of heat generated is very different. You can't really replace a GPU in a satellite (at least today?), which would turn most of these satellites into space debris on a ~5 year horizon.
The current bottleneck is silicon. Every chip that is manufactured gets housed and powered. (It makes sense: the cost of compute is dominated by capex, the power costs are irrelevant, so they're ok paying a premium for power).
The space data center hypothesis relies on compute supply growing faster than power supply. (Both are bottlenecked on parts of the supply chain that will take ages to scale.)
Even if you believe that's the case, the point at which orbital data centers start making sense is incredibly sensitive to the exact growth rates.
The current bottleneck is not silicon. There is plenty of silicon locked up in previous gen GPUs that are no longer efficient enough to run relative to newer models. The bottleneck is the economics of owning the older GPU models - which is why all the GPU neoclouds are gonna go bust unless they can get customers to continue renting old GPUs.
The economics are vastly different when opex is near zero for these things
H100 rental prices are still as high as when the cards were brand new. The prices vastly exceed the power costs.
In a world where power or DC permits are the current bottleneck those H100s would be getting retired in favor of Blackwells. But they aren't. They are instead being locked in for years long contracts.
Because you'd need to trash the old GPUs in order to make room for new GPUs. Right now new GPUs get online mostly in new DCs. TSMC fab capacity is much more limiting than DC building and it will likely keep being the case. It's much easier to build a DC than a fab.
If silicon were relatively abundant and power/DC space scarce, you'd get an order of magnitude more bang for the Watt by replacing the H100s with newer GPUs.
But nobody is doing that. Blackwells are being installed as additional capacity, not Hopper replacements.
So it is pretty clear that silicon is the primary bottleneck.
"Space datacenter" -> overpriced starlink with some shitty edge compute -> "look guys, we built a space datacenter; earnings results to follow" -> number go up.
Pretty much everything has been "very dumb because of economics, logistics, serviceability and more". What kind of hacker are you to be on this site lol
SpaceX have presented on this and it's fairly straightforward; it's just a scaled-up version of what they already do with starlink satellites. Sounds like you are the uninformed one (or an EDS victim)
Starlink satellites don't generate the sort of heat a datacenter full of GPUs does. The ISS has enormous radiators, and it's only in space because it's a space station. Putting datacenters there is just goofy given the amount of available space on the ground.
All of that has been repeatedly addressed in anything that discusses it, if you care to try to understand. It has ~nothing to do with available space, the US grid can’t handle the current rate of expansion. It’s bad enough that apparently Span, the smart electrical panel company, is pitching a box full of Blackwells that’ll sit outside new construction homes and use all the headroom on residential 200A circuits. Space is starting to look reasonable.
Related, US readers should call their reps and ask them to support a successor to EPRA, the Energy Permitting Reform Act. The vast majority of the generation that’s waiting for approval is from clean energy sources. It nearly got over the line before the last Congress ended, and it’s one of the most impactful things we can do to combat climate change, combined with electrifying various carbon intensive activities.
Not quite, I'm rooting for the solar/battery microgrids down here, one of the startups I've invested in is working on those, but you don't really even need batteries for panels in a dawn-dusk sun synchronous orbit, which is a pretty huge advantage. Also, there aren't weeks where you have 1/4 the output because it's just cloudy all week, and your output isn't crushed during winter.
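The no-night, no-weather point is the big lever. A toy comparison using illustrative capacity factors (my numbers: roughly 22% for a decent terrestrial site after night, clouds, and seasons, vs. near-100% for panels in a dawn-dusk sun-synchronous orbit):

```python
# Toy annual-energy comparison per installed kW of solar panel.
# Capacity factors are illustrative assumptions, not measurements.
HOURS_PER_YEAR = 8760

def annual_kwh(panel_kw: float, capacity_factor: float) -> float:
    """Annual energy yield in kWh for a given installed capacity."""
    return panel_kw * capacity_factor * HOURS_PER_YEAR

ground = annual_kwh(100, 0.22)  # decent terrestrial site
orbit = annual_kwh(100, 0.99)   # dawn-dusk SSO, almost always lit
print(f"orbit yields {orbit / ground:.1f}x the energy per installed kW")
```

Roughly 4.5x the energy per installed kW, before accounting for launch mass, radiation degradation, and the rest, so this is an upper bound on the advantage, not the net economics.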
And the hardest part of my home solar install, by far, was the counterparties (inspectors, power company, and subcontractors). My understanding is that it's much worse when you're trying to get a grid scale install online, the interconnection queue is currently years long. This avoids most counterparties except the ones they're already routinely dealing with.
I've heard this before, and these are not comparable at all. Starlink is missing a few digits in its power usage and heat dissipation needs compared to a datacenter.
Scott Manley, I’d say one of the top pop space youtubers, says otherwise. If anything it’s easier in space. On Earth most of the complexity in a datacenter is cooling. In space you just radiate it away.
And SpaceX has already proven they can launch sort-of datacenters 10k times over by launching Starlink (up to 20kW of solar each, IIRC).
FWIW Musk should support Bernie Sanders more. Putting moratoriums on datacenters would make space based ones far more economical.
He just mentions and walks through idea of having some amount of compute up there and what the heat rejection calculations roughly look like. He doesn't actually explore the economics of doing such a thing or discuss if it's actually worth doing.
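For anyone curious what those calculations roughly look like: it's the Stefan-Boltzmann law. A minimal sketch with my own assumed parameters (two-sided flat radiator, emissivity 0.9, surface held at 60 °C, incoming solar and albedo heat ignored), not Manley's exact numbers:

```python
# Radiator area needed to reject a given power, via the Stefan-Boltzmann law.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(power_w, t_surface_k=333.0, emissivity=0.9, sides=2):
    """Area to radiate `power_w` watts, ignoring incoming solar/albedo heat."""
    flux_per_side = emissivity * SIGMA * t_surface_k ** 4  # W per m^2
    return power_w / (flux_per_side * sides)

# A ~30 kW Starlink-class bus works out to roughly 24 m^2 of radiator.
print(f"{radiator_area_m2(30_000):.1f} m^2")
```

Real designs come out larger because the radiator also absorbs sunlight and Earth albedo depending on attitude, but the scale is "big solar-panel-like wings", which is exactly why the economics question, rather than the physics, is the interesting one.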
It's not that you can't put a server in space, but the costs to do it almost assuredly don't make any sense: if you can do it in space, you can do it more easily on the ground and save yourself millions in launch cost and extra complexity. Your cooling challenges are way cheaper and simpler in an atmosphere.
There's nothing much being in space really gets you, other than it makes it harder for a government to take your computers away. Not impossible, just harder.
Especially with everyone clamoring to have datacenters built in their backyards. There's absolutely no way there can be an advantage to figuring out compute outside Earth's magnetosphere, especially since none of the engineers at SpaceX would ever think of any long-term benefits of that.
My issue with this: if my intention is to never have these "co authored by <tool>" trailers in my commits, this is a sudden breaking change. What's worse, it is not immediately visible to the user. Now I could look like I use a not-company-approved AI. That's absolutely unacceptable; this could cost people their jobs. The "bug" (or "metrics boosting feature", as PMs call it?) that makes it claim all commits, including ones never touched by Copilot, is just icing on the cake.
This is set up for the same fate as DNT in browsers. Collecting all the "do not track" env vars into a single "do_not_track.env" file, however, may not be a bad idea...
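As a sketch of what such a file could contain: the variable names below are real telemetry opt-out switches that various CLIs check, though the list is illustrative rather than exhaustive, and the consolidated file name is just this comment's proposal, not a standard. A small generator, assuming you'd source the output from your shell profile:

```python
# Generate a consolidated do_not_track.env file to source from a shell profile.
# Variable names are real opt-outs; the list is illustrative, not exhaustive.
OPT_OUTS = {
    "DO_NOT_TRACK": "1",                 # consoledonottrack.com convention
    "DOTNET_CLI_TELEMETRY_OPTOUT": "1",  # .NET CLI
    "HOMEBREW_NO_ANALYTICS": "1",        # Homebrew
    "NEXT_TELEMETRY_DISABLED": "1",      # Next.js
    "GATSBY_TELEMETRY_DISABLED": "1",    # Gatsby
}

def render_env(opt_outs: dict) -> str:
    """Render `export KEY=VALUE` lines suitable for sourcing from a shell."""
    return "\n".join(f"export {k}={v}" for k, v in sorted(opt_outs.items())) + "\n"

print(render_env(OPT_OUTS))
```

The `DO_NOT_TRACK=1` convention already exists as an attempt at exactly this kind of single switch; the open question is whether tool vendors honor it any better than advertisers honored the browser flag.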
Advertisers chose to ignore DNT because they claimed Microsoft making DNT enabled by default took agency away from the user. In reality, they probably weren't going to honor it anyway.
There's an inherent conflict. No one _wants_ to be tracked; there is no direct benefit to being tracked, only downsides. And advertisers want to track you. So there was no way to respect the flag other than making it obscure, so that only a few dedicated people turned it on.
I think getting ads that are relevant to me is better than complete nonsense. BUT, I also don’t want to give advertisers any information to do it. (Maybe A.S.L. is ok to share?)
Yes. I know my two thoughts are in conflict, for the advertisers. Too bad for them. Figure it out.
In other words, advertisers wrangled out of something that could help people because they claimed it wasn't the true intent of people?
Advertisers are the scum of the Earth, as someone with ADHD who doesn't ever consent to my attention being stolen in that way. I really don't care what their opinion is, since they're intruding into my headspace without permission
To play devil's advocate, there is a direct benefit to being tracked: at least theoretically, search and ads will be more relevant to you. I get that no one wants ads, but you do see ads here and there. It would arguably be better for you if every one of them was relevant than not. Similarly, search or even LLM answers could be better if the preferences of the asker are known.
No, I'm not making excuses for tracking, and I do lots of stuff myself to avoid being tracked.
I’m only responding to the false premise that there are no benefits. There are. You can just choose to believe they aren’t worth the cost. I believe they aren’t, but I have friends who opt into all tracking and even register their presence with multiple apps. They believe they’ll make more positive connections.
Exactly. From my experience: the times I've found an ad relevant and worth clicking is about one-to-a-gazillion. Maybe relevance is higher for others, but that still doesn't necessarily translate to real value (i.e. that your life was improved in any way).
Also, this all presumes the targeting actually works, and the current sea of ads for shoes I just bought disagrees with that. It's all just spam.
Microsoft is too sophisticated to plead ignorance; they are responsible for that outcome, and I think we can assume they knowingly chose it. (Though now Microsoft browsers are such a small portion of the market that it doesn't matter.)
The biggest failure of DNT was browser makers - including Mozilla - removing it. It has zero performance impact (1 bit?) or development cost. As long as it was out there, when there was momentum against tracking, advocates had evidence of both demand for privacy and of trackers ignoring user wishes.
> advocates had evidence of both demand for privacy and of trackers ignoring user wishes.
This evidence both still exists and is also completely useless for anything. The more important consideration, by far, is that the DNT flag was actively harmful to users in the real world because, if it was acknowledged at all, it was used maliciously to help fingerprint and track users. There is no reason for browsers to continue providing to their users a toggle that not only misleads them about what will happen with the setting enabled, but actively contributes to the opposite outcome because we live in a world where being evil is the norm.
Lately, I've come across websites that, instead of a cookie banner, display a banner stating that they recognize and honor my wish to not be tracked. Whether they really do or not is something I did not spend time looking into. The first time I saw it I thought it was a fluke, and then it happened a few more times within a short period. Couldn't tell you what sites they were though, as it was just something from search results.
::shrug:: I set it a long time ago and never looked back. I never looked into it being deprecated, but I knew that pretty much everyone ignored it for reasons. But by these banners, I'm guessing it still lives on as a setting.
Advertisers ignored it because MS decided to turn it into opt-in instead of opt-out, and advertisers very much hate opt-in. They'd much rather require the most permissive defaults, and put every barrier they can in front of opting out.
Love it. This is an annoying problem and likely the actual solution than asking folks to use a universal one. I'll put something together as a starting point.