Let me preface my comment by saying I also still write a lot of code by hand - especially when it's something I know I need to understand in depth, and in some cases defend.
With that said, this caught my eye:
> AI gravitates toward single-struct-holds-everything because it satisfies the immediate prompt with minimal ceremony.
This is too general. "AI" is used here as a catch-all, but in fact, it was the specific model under the specific conditions you ran your prompt, including harness, markdowns, PRDs, etc. So it's not fair to say "AI does X!" in this case.
It's also very much up to you. It's very common to have a frontier model plan an architecture before you have another model implement code. If you're just one-shotting an LLM to do everything you get mediocre, more brittle code.
This stuff is still being figured out by a lot of people. But I feel the core of the issue is not using AI well. Scoping, task alignment, and validation are crucial.
The battle is lost. You never had a chance. There's nothing you can do against the constant torrent of AI content that's only getting started. The online communities that we know and love are going to change and there's nothing we can do about it. You can't keep AI out of any platform no matter what the community guidelines say or even if it seems locked down with no bot access.
The only solution is in person meetups, bringing back the 3rd places, joining a club. Maybe it's not such a bad outcome.
It will be a beautiful day when I can finally lose all my Adobe accounts and software. Kdenlive is definitely on the right track, BUT having a real risk of losing my project after days and weeks of work is not something I can afford. I am following this with great interest and waiting for the right time to jump on board.
Where did you hear about losing your work? Did you experience it? Did you report it? Kdenlive has a very robust project recovery system; even if it crashes, you are able to recover your lost work. Also, in any software you should save continuously.
These are thoughts of someone who's very good at putting words together, but sadly has little experience with the subject matter.
> I’ve thought about this a lot over the last few years, and I think the best response is to stop.
This is exactly where it shows.
LLMs, agents and whatever comes next are not only the future of tech, but they are going to be national resilience drivers for the countries that will be able to support them with power, water and science.
Who is supposed to stop? The US? China? Russia? Everyone? Of course this won't happen. This is an arms race.
But even if it weren't, stopping is the wrong answer. You don't have to outsource your thinking, writing or reading. How you use LLMs is entirely up to you.
There is a way to use LLMs that is beneficial. I treat them as a private tutor available to me for questions. This resolved a lot of the friction in my relationship with LLMs.
More telling is that the author mainly thinks about their relationship with LLMs while in reality the space has moved on to automation with agents. You don't interact with LLMs as much as before, and if you still do, then soon you won't.
Agents are not really ML. It's harnesses and parsing and memory and metrics. It's software. Should we stop this as well?
Ollama is the worst engine you could use for this. Since you are already running on an Nvidia stack for the dense model, you should serve this with vLLM. With 128GB you could try for the original safetensors even though you might need to be careful with caches and context length.
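For illustration, here's a minimal sketch of what serving with vLLM's Python API can look like; the model name, context length, and memory fraction below are placeholders you'd adjust for your own hardware and the safetensors checkpoint you actually run:

```python
from vllm import LLM, SamplingParams

# Placeholder model name; point this at the safetensors repo/path you actually use.
llm = LLM(
    model="your-org/your-dense-model",
    max_model_len=8192,           # keep context modest to leave room for the KV cache
    gpu_memory_utilization=0.90,  # fraction of GPU memory vLLM is allowed to claim
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize why PCIe lane count matters for eGPUs."], params)
print(outputs[0].outputs[0].text)
```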
Strangely, I haven't had a lot of luck with vLLM; I finally ended up ditching Ollama and going straight to the tap with llama-server in llama.cpp. No regrets.
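If it helps anyone, llama-server exposes an OpenAI-compatible API (port 8080 by default), so pointing a standard client at it is about as simple as it gets. The model name below is arbitrary, since the server already has whatever model you loaded with -m:

```python
from openai import OpenAI

# llama-server's OpenAI-compatible endpoint; no real API key is needed unless you configured one.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local",  # ignored by llama-server; it serves the model loaded at startup
    messages=[{"role": "user", "content": "Give me one reason to prefer llama.cpp over Ollama."}],
)
print(resp.choices[0].message.content)
```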
> End of the PC era, there's nothing to tinker with anymore. And certainly no gradient for entrepreneurship for once-skilled labor capital.
This one seems too far-fetched. Training models is widespread. There will always be open-weight models in some form, and if we assume there will be advancements in architecture, I bet you could also run them on much leaner devices. Even today you can run models on Raspberry Pis. I don't see a reason this will stop being a thing; there will be plenty of ways to tinker.
However, keep in mind the masses don't care about tinkering and never have. People want a ChatGPT experience, not a pytorch experience. In essence this is true for all tech products, not just AI.
It works fine on a Mac (that's what we developed it on), and it's not nearly as much overhead as I was initially expecting. There's probably some added latency from VirtualBox, but it hasn't been noticeable in our usage.
The top Mac Studio has six Thunderbolt 5 ports, each of which is a PCIe 4.0 x4 link. Each is an 8 GB/s link in each direction, which is a lot. Going from x16 down to x4 has less than a 10% hit on games: https://www.reddit.com/r/buildapc/comments/sbegpb/gpu_in_pci...
“In the more common situations of reducing PCI-e bandwidth to PCI-e 4.0 x8 from 4.0 x16, there was little change in content creation performance: There was only an average decrease in scores of 3% for Video Editing and motion graphics. In more extreme situations (such as running at 4.0 x4 / 3.0 x8), this changed to an average performance reduction of 10%.”
Oculink is generally faster than TB5 despite them both using PCIe 4.0, because Oculink provides direct PCIe access whereas Thunderbolt has to route all PCIe traffic through its controller. The benchmarks show that the overhead introduced by the TB5 controller slows down GPU performance.
It's not just the controllers; the Thunderbolt protocol itself imposes different speed limits. The bit rates used by Thunderbolt aren't the same as PCIe, and PCIe traffic gets encapsulated in Thunderbolt packets.
Maybe; I'm unable to find any benchmarks that specifically compare PCs with TB to Macs to test this. But there is certainly still overhead with TB no matter what, and therefore it'll never be as fast as Oculink.
Sure, but how big of a difference is there? Even inside a desktop PC, you typically have PCIe ports directly off the CPU and ones off the chipset, and the latency for the latter is double. But the difference is immaterial in practice.
I think latency is the wrong focal point (more important for gaming, plus Macs don't support eGPUs anymore). There aren't a lot of general workloads that require high sustained throughput, but the ones that do can benefit from TB5 scaling.
For instance, if you cluster Mac Studios over TB5 with RDMA, the performance can be pretty stellar. It may not be more cost effective than renting compute for the same tasks, but if you've got (up to) four M3 Ultras with a ton of RAM, you'll be hard pressed to find something similar.
That's still not as good as having native alternatives like OCuLink or something that can be networked like QSFP, but it's a fair way to highlight the current design's strengths.
That's just blatantly wrong; the performance loss for GPUs is very well documented and gets worse as you go toward higher-end models. We're talking a 30%+ loss of performance here.
Sure. And lots of people need all that I/O. But my point is that it's not like the Mac Studio has no I/O. The outgoing Mac Pro only has 24 total lanes of PCIe 4.0 going to the switch chip that's connected to all the PCI slots. The advent of externally routed PCIe is a development of the last few years that may have factored into the change in form factor.
When people talk about 100 gigabit networks for Macs, I'm really curious what kind of network you run at home and how much money you spent on it. Even at work I'm generally seeing 10 gigabit network ports, with 100 gigabit+ only in data centers, where Macs don't have a presence.
Local AI is probably the most common application these days.
Apple recently added support for InfiniBand over Thunderbolt. And now almost all decent Mac Studio configurations have sold out. Those two may be connected.
100 Gb/s Ethernet is likely to be expensive, but dual-port 25 Gb/s Ethernet NICs are not much more expensive than dual-port 10 Gb/s NICs, so whenever you are not using the Ethernet ports already included by a motherboard it may be worthwhile to go to a higher speed than 10 Gb/s.
If you use dual-port NICs, you do not need a high-speed switch, which may be expensive; you can connect the computers directly into a network and configure them as either Ethernet bridges or IP routers.
I work in media production and I have the same thought constantly. Hell, I'm cursing in church as far as my industry is concerned, because I find 2.5 Gb/s to be fine for most of us. 10 Gb/s, absolutely.
100 Gb/s is going to be for mesh networks supporting clusters (say, 4 Mac Studios), not for LAN-type networks (unless it's in an actual datacenter).
I suppose throughput is not the key; latency is. When you split an operation that would normally run within one machine across two machines, anything that crosses the boundary becomes orders of magnitude slower. Even with careful structuring, there are limits to how little and how rarely you can send data between nodes.
I suppose that splitting an LLM workload is pretty sensitive to that.
Things that aren't graphics cards, such as very high-bandwidth video capture cards and any other equipment that needs a lot of PCIe lanes at low latency.
Multiple GPUs was tried, by the whole industry including Apple (most notably with the trash can Mac Pro). Despite significant investment, it was ultimately a failure for consumer workloads like gaming, and was relegated to the datacenter and some very high-end workstations depending on the workload.
Multi-GPU has recently experienced a resurgence due to the discovery of new workloads with broader appeal (LLMs), but that's too new to have significantly influenced hardware architectures, and LLM inference isn't the most natural thing to scale across many GPUs. Everybody's still competing with more or less the architectures they had on hand when LLMs arrived, with new low-precision matrix math units squeezed in wherever room can be made. It's not at all clear yet what the long-term outcome will be in terms of the balance between local vs cloud compute for inference, whether there will be any local training/fine-tuning at all, and which use cases are ultimately profitable in the long run. All of that influences whether it would be worthwhile for Apple to abandon their current client-first architecture that standardizes on a single integrated GPU and omits/rejects the complexity of multi-GPU setups.
I see AI as a new, unreliable resource that I can try and tame with good software practices. It's an incredibly fun challenge and there's a lot to learn.