The largest commercial classes of multi-domain therapeutic proteins include the CRISPR nucleases (and similar) that drive gene therapies, and the chimeric antigen receptors (and similar) that drive cell therapies.

But lead optimization there looks different from this page's efforts.
As problems go, radiation and cooling seem to have relatively low dimensionality compared to the others. It seems to be mostly a question of optimizing within the dimensions of dissipation / structure / deployment / service / cost / weight. When all is said and done, the cooling solution will end up being a module that can deal with some power dissipation, cost X, weigh Y, and have structural interface Z. This seems like something a relatively small number of engineers can iterate on, largely isolated from other concerns. SpaceX does have 5000+ of them.

Compare this to scaling the production of compute, where they are trying to work outside the bounds of ASML (~40k employees) and TSMC (~80k+ employees), and where there is a huge number of degrees of freedom across many layers of the stack, with complicated interactions.

SpaceX also has plenty of experience with both radiation and cooling already, given that they've had to solve them on existing satellites. Overall, Terafab just seems like a far harder challenge, and one where I'd be more wary of timelines.
Radiators are raised because they're a known constraint: the Stefan-Boltzmann law implies a lot of radiator mass to be launched even at 100% radiative efficiency, and there are also theoretical limits to launch efficiency, which Starship is rapidly approaching.
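For a rough sense of scale, here's a back-of-envelope sketch. All the inputs are assumptions I'm picking for illustration (double-sided panels at 300 K, emissivity 0.9, ~3 kg/m² areal density, 1 GW of waste heat), not anyone's published design:

```rust
// Radiator sizing from the Stefan-Boltzmann law: P = e * sigma * A * T^4.
fn main() {
    const SIGMA: f64 = 5.670e-8; // Stefan-Boltzmann constant, W/(m^2 K^4)
    let (emissivity, temp_k, kg_per_m2) = (0.9, 300.0_f64, 3.0);

    // Both faces radiate, so double the single-sided flux.
    let flux = 2.0 * emissivity * SIGMA * temp_k.powi(4); // ~827 W/m^2

    let heat_load_w = 1.0e9; // 1 GW datacentre heat load
    let area_m2 = heat_load_w / flux; // ~1.2e6 m^2
    let mass_tonnes = area_m2 * kg_per_m2 / 1000.0; // ~3600 t

    println!("{flux:.0} W/m^2 -> {area_m2:.2e} m^2 -> ~{mass_tonnes:.0} tonnes");
}
```

Even with generous assumptions, that's thousands of tonnes of radiator per gigawatt before a single chip has been launched.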
Nobody is saying orbital datacentres can't be cooled. They're saying that people who argue launching the required radiator mass into space is a better, more cost-effective cooling solution than pumping local water because "space is cold" are talking nonsense. Potential solutions don't look like getting 5000 engineers to invent radiators that defy the laws of physics; they probably look like amortising the costs over multiple decades of operation and, ideally, assembling the radiator portion of the datacentre from mass that's already in orbit - but that's not a near-term profit pitch.
I read the comment I replied to as saying these challenges are a large impediment to this development, and I pointed out that they seem like challenges that can be dealt with mostly in isolation from the others, and that in particular don't require a large number of engineers.
Of course the major exercise becomes one of total cost efficiency, but I think a large attraction is that once you've solved space deployment sufficiently, you don't need to keep dealing with local circumstances and power-production adaptations for every new site on Earth; it's more about producing a set of modules you can keep launching without individual adaptation - not about "space being cold".
The point is that they're absolutely not in isolation from other challenges, because designing something to radiate heat at the maximum possible radiative cooling efficiency is not considered to be the problem; solving the unit economics of launching the required radiator tonnage, at ~100 tonnes of rocket fuel burned per tonne launched, is the problem. Cutting-edge stuff like in-space refuelling, modular in-space reassembly, and patient capital are crucial to making that work, because the radiators aren't getting beyond 100% radiative efficiency however well designed they are.
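To put hypothetical numbers on it (the $/kg price points below are pure assumptions, not anything SpaceX has published):

```rust
// Illustrative radiator launch bill at assumed launch price points.
fn main() {
    let radiator_tonnes = 3_600.0; // ~1 GW worth, per the sizing sketch above
    for usd_per_kg in [1_500.0, 500.0, 100.0] {
        let launch_cost = radiator_tonnes * 1_000.0 * usd_per_kg;
        println!("${usd_per_kg}/kg to orbit -> ${:.2}B for radiators alone",
                 launch_cost / 1.0e9);
    }
}
```

Even at the most aspirational price, the radiator launch bill alone runs to hundreds of millions of dollars, which is exactly why decades-long amortisation is doing so much work in any viable pitch.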
Optimizing for local circumstances is a benefit of doing things on Earth: if having a production line and the ability to plug into wherever energy happens to be cheapest were better, we'd all be sticking inference chips in shipping containers and not worrying about HVACs being relatively inefficient at cooling.
> The point is that they're absolutely not in isolation from other challenges, because designing something to radiate heat at the maximum possible radiative cooling efficiency is not considered to be the problem; solving the unit economics of launching the required radiator tonnage, at ~100 tonnes of rocket fuel burned per tonne launched, is the problem.

I was pointing out relative coupling, not absolute coupling. The coupling between the different design decisions involved in Terafab or Starship seems far greater, as there are so many design levels to unify jointly, while the structural and thermal design of these satellites appears to be something that can, to a greater degree, be resolved with less coupling between design constraints - i.e. it's more feasible to figure out with a smaller number of people.
> Optimizing for local circumstances is a benefit of doing things on Earth: if having a production line and the ability to plug into wherever energy happens to be cheapest were better, we'd all be sticking inference chips in shipping containers and not worrying about HVACs being relatively inefficient at cooling.

I did not reference energy cost directly. In many countries there are year-long queues for data centers to even be allowed to connect to the grid, which is why many also resort to local gas-turbine power plants etc. Having a cost-effective method (the unknown is if/when this becomes possible) of deploying large units of compute without dealing with this power access issue, zoning issues, local policies, etc. appears to be one of the large attractions of this endeavor, in addition to being able to avoid longer-term scaling issues. Inference chips in shipping containers are not cost-effective at scale now, and that does not seem to be on the horizon. Space-based compute, however, seems to be a more open question depending on your timeline.
> I was pointing out relative coupling, not absolute coupling. The coupling between the different design decisions involved in Terafab or Starship seems far greater, as there are so many design levels to unify jointly, while the structural and thermal design of these satellites appears to be something that can, to a greater degree, be resolved with less coupling between design constraints - i.e. it's more feasible to figure out with a smaller number of people.

Sure, but you're missing the point that people familiar with spacecraft systems engineering are actually making, which isn't "radiators are a problem because they're hard to design" but "radiators are a problem because it's hard to design everything else to offset their relatively large mass budget, and thus every other aspect of designing and operating an ODC as a profitable alternative to terrestrial datacentres is coupled to the theoretical limits on how low the radiator launch mass can be". The number of engineers required to design the radiators themselves is totally irrelevant, but you can't isolate the radiators' required launch mass from the overall concept of operations and the operating economics.
One issue with this argument is that very few engineers have had the opportunity to design satellites that are this large, designed for mass manufacturing, rapid iteration, and failure tolerance, and built with access to a reusable launch vehicle with the capability of Starship (where it's also unknown what launch mass capability it will end up reaching).

The satellites built by SpaceX so far, and their engines, are quite unlike most previous space engineering for these reasons. Given the undeniable success they've had in building Starlink, with each version growing considerably in size, I just don't see which engineers would be able to fully rule out the math SpaceX might be working on here, precisely because there are so many parts to the total equation, and because SpaceX is moving outside previous design envelopes in many dimensions.

Of course I'm personally not convinced, nor able to know, whether this is economically sensible - I just believe it's very difficult to fully rule out given SpaceX's track record, and given that there doesn't appear to be any single insurmountable thing that needs to be figured out here. Hence why I said in my original post that I'm excited to see the design space explored.
Isn't the question more an economic one: is it cheaper to put some solar cells in the desert and buy some batteries, or to launch things into space (plus the premium for radiation hardening and for ensuring it survives long enough, because you cannot service it)?

Given the current trajectory of battery and solar prices, I just don't see that space-based systems are cheaper in any way.

Of course there is a long-term aspect should we climb the Kardashev scale: once we have used all the solar radiation reaching Earth, we must move to space to grow. But that is decades if not centuries away.
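As a very rough sketch of why I'm sceptical (every figure below is an assumption of mine, not a sourced number):

```rust
// Crude cost-per-continuous-watt comparison; all inputs are guesses.
fn main() {
    // Ground: utility solar at ~$1.0/W installed, ~25% capacity factor,
    // firmed with ~12 h of batteries at ~$150/kWh (= $0.15/Wh).
    let ground = 1.0 / 0.25 + 0.15 * 12.0; // ~$5.8 per continuous watt

    // Orbit: arrays at ~100 W/kg in near-constant sunlight, launched at an
    // assumed $500/kg. Ignores radiators, rad-hardening, the bus, and the
    // fact that nothing up there can be serviced.
    let orbital_launch_only = 500.0 / 100.0; // ~$5.0 per continuous watt

    println!("ground, firmed:          ~${ground:.1}/W");
    println!("orbit, launch cost only: ~${orbital_launch_only:.1}/W");
}
```

On those guesses, launch costs alone roughly match the entire firmed ground system before any space hardware is counted.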
LLMs get ridiculously good with Elixir, especially given the REPL, the runtime, and the ability to hot reload / directly test functions. It's really surprising to me that it hasn't caught on more, but I guess you have to see it to believe it.
Built my startup in Elixir and can concur. Elixir has a relatively consistent syntax that makes it a pretty good target for LLMs.

In my opinion, the only thing holding Elixir back as an LLM target is that there's not as much training data for LLMs to work with.
Of course, if we had a new AI that could be trained on a minimum of existing training data, Common Lisp would absolutely beat out everything else. Everything you mentioned about Elixir (REPL, runtime, and the ability to hot reload / directly test functions) is possible in Lisp, and was invented there, with an AST rather than a syntactic language as the ultimate build artifact. CL lets you recover from exceptions and rewind the stack before reloading your fixes and continuing. I can't even fathom the workflows an LLM could come up with given that.
Hm, last I checked counters were fully implemented as atomics. It seems there's now another internal BIF for "write concurrency" - is this new, or was it always like this and I just missed it?
It'd take more than that to match Rust's borrow checker. Rust's borrow checker tracks lifetimes, and sometimes needs annotations in code to help it understand what you're actually trying to do. I suppose you could work around that by adding lifetime annotations in Zig comments. Then you'd have a language that's a lot like Rust, but without an ecosystem of borrowck-safe libraries. And with worse ergonomics (Rust knows when it can Drop). And Rust can put noalias everywhere in emitted code. And you'd probably have worse error messages than the Rust compiler emits.

It's an interesting idea. But if you want static memory safety in a low-level systems language, it's probably much easier to just use Rust.
> I suppose you could work around that by adding lifetime annotations in Zig comments.

You can make a no-op function that gets compiled out but survives AIR (Zig's analyzed intermediate representation).

> Rust knows when it can Drop.

And it's possible to cause problems if you aren't aware of where Rust picks to drop.

> And Rust can put noalias everywhere in emitted code.

Zig has noalias, and it should be possible to do alias tracking as a refinement.

> But if you want static memory safety in a low-level systems language, it's probably much easier to just use Rust.

Don't use that attitude to suck the oxygen out of the room. Rust comes with its own baggage, so "just use Rust because it's the only choice" keeps you in a local minimum.
> And it's possible to cause problems if you aren't aware of where Rust picks to drop.

Can you give some examples? I've never run into problems due to this.
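The closest thing I can think of is the temporary-lifetime gotcha: a guard created in a match scrutinee is dropped at the end of the whole match, not right after the value is read. A minimal sketch of the classic deadlock:

```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(0);
    // The MutexGuard returned by lock() is a temporary in the scrutinee;
    // temporaries there live until the END of the match expression.
    match *m.lock().unwrap() {
        0 => {
            // Deadlock (or panic): the first guard is still alive here.
            *m.lock().unwrap() += 1;
        }
        _ => {}
    }
}
```

But that's arguably a lock-API footgun more than a Drop-placement problem, and clippy's significant_drop_in_scrutinee lint exists to catch it.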
> Don't use that attitude to suck the oxygen out of the room. Rust comes with its own baggage

Yeah, that's a totally fair argument. One nice aspect of the approach you're proposing is that it'd give you the opportunity to explore more of the borrow checker design space. I'm convinced there's a giant forest of different ways we could do compile-time memory safety. Rust has gone down one particular road in that forest, but there are probably loads of other options that nobody has tried yet. Some of them will probably be better than Rust - they just haven't been thought through yet.
I wish you luck in your project! If you land somewhere interesting, I hope you write it up.
> If it's doing a drop in the hot loop, that may be an unexpected performance regression that could be carefully lifted.

Yeah, I've heard of people making massive collections of Box'ed entries and then being surprised that it takes a long time to Drop the whole thing. But this would be the same in C or Zig too. Malloc and free are really complex functions, and reducing heap allocations is an essential tool for optimisation.

The solution to this "unexpected performance regression" in Rust is the same as it is in C, C++ and Zig: stop heap allocating so much. Use primitive types, SSO types (SmartString and friends in Rust) or memory arenas. Drop isn't the problem.
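A quick way to see both the cost and the fix (timings are illustrative and allocator-dependent):

```rust
use std::time::Instant;

fn main() {
    let n = 10_000_000u64;

    // One heap allocation per element: drop has to free all ten million.
    let boxed: Vec<Box<u64>> = (0..n).map(Box::new).collect();
    let t = Instant::now();
    drop(boxed);
    println!("drop Vec<Box<u64>>: {:?}", t.elapsed());

    // Same data stored inline: one allocation, one free.
    let inline: Vec<u64> = (0..n).collect();
    let t = Instant::now();
    drop(inline);
    println!("drop Vec<u64>:      {:?}", t.elapsed());
}
```

An arena crate like bumpalo gives you the same one-free behaviour for mixed allocations, with the caveat that bump arenas typically don't run the contained values' destructors.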
Yeah, IMO explicit is generally better. It's hard to take something implicit and increase its visibility (I'm aware there are tools to show you lifetimes in Rust). But another option is to statically analyze the code (or the IR) and have something else check that you aren't leaking.

True. But Rust does make it a lot harder to leak memory by accident. Rust variables are automatically freed when they go out of scope, and ownership semantics mean the compiler knows when to free almost everything.
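A minimal sketch of both halves of that: automatic freeing at scope exit, and the main accidental-leak loophole that remains (Rc cycles):

```rust
use std::cell::RefCell;
use std::rc::Rc;

#[allow(dead_code)]
struct Node {
    next: Option<Rc<RefCell<Node>>>,
}

fn main() {
    {
        let _v = vec![1, 2, 3]; // heap allocation
    } // _v goes out of scope: freed here, no explicit call needed

    // The compiler can't save you from reference-counted cycles, though:
    let a = Rc::new(RefCell::new(Node { next: None }));
    let b = Rc::new(RefCell::new(Node { next: Some(a.clone()) }));
    a.borrow_mut().next = Some(b); // a -> b -> a: refcounts never hit zero
}
```

(std::mem::forget and Box::leak are also safe, which is why leak freedom isn't part of Rust's safety guarantees - it's just much harder to leak by accident.)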
I can think of:
etanercept