

That’s why you split the ship in two and spin the habitation module around the heavier part of the ship¹, connected by a tether, as in Project Hail Mary (which the video says is still too fast… so just make the tether longer).
¹ Well, around their common barycentre, but you know what I mean.
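The "make the tether longer" point follows directly from the spin-gravity formula: centripetal acceleration is a = ω²r, so for a fixed target of 1 g, the required spin rate falls with the square root of the radius. A quick sketch (the radii are arbitrary examples; rotation rates below roughly 2 rpm are often cited as the comfort threshold for avoiding Coriolis-induced nausea):

```python
import math

G = 9.81  # target centripetal acceleration, m/s^2 (1 g)

def spin_rate_rpm(radius_m: float, accel: float = G) -> float:
    """Spin rate (rpm) needed to feel `accel` at distance `radius_m`
    from the spin axis: a = omega^2 * r  =>  omega = sqrt(a / r)."""
    omega = math.sqrt(accel / radius_m)    # rad/s
    return omega * 60.0 / (2.0 * math.pi)  # convert rad/s to rpm

# Quadrupling the tether length halves the required spin rate:
for r in (50, 200, 450):
    print(f"radius {r:4d} m -> {spin_rate_rpm(r):.2f} rpm")
```

So a 50 m tether needs over 4 rpm for a full g, while a few hundred metres gets you under the usual comfort limit.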

Exactly. When it comes to code, for instance, what percentage of the training data is Knuth, Carmack, and similarly skilled programmers, and what percentage is spaghetti code perpetrated by underpaid and uninterested interns?
Shitty code in the wild massively outweighs properly written code, so by definition an LLM autocomplete engine, which at best can only produce an average of its training corpus, will only produce shitty code. (Of course, average or below-average programmers won’t be able, or willing, to recognise it as shitty code, so they’ll feel like it’s saving them time. And above-average programmers won’t have a job anymore, so they won’t be able to do anything about it.)
And as more and more code is produced by LLMs, the percentage of shitty code in the training data will only rise, and the shittiness will only compound, until newly trained LLMs can only produce code too shitty even to compile, there are no programmers left to fix it, and civilisation collapses.
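The degenerative loop described above can be sketched as a toy recurrence: each "generation", a model trained on the current corpus emits code whose defect rate matches the corpus average plus a small drift term, and that output is folded back into the next corpus. All parameters here are invented for illustration, not measurements:

```python
def next_defect_rate(p: float, llm_share: float = 0.5, drift: float = 0.02) -> float:
    """One generation of the feedback loop: mix existing code
    (defect rate p) with LLM output whose defect rate is p + drift,
    capped at 1.0. Returns the new corpus-wide defect rate."""
    llm_rate = min(1.0, p + drift)
    return (1 - llm_share) * p + llm_share * llm_rate

p = 0.7  # assumed starting defect rate of the public corpus
for gen in range(1, 6):
    p = next_defect_rate(p)
    print(f"generation {gen}: defect rate {p:.3f}")
```

With any positive drift and any nonzero LLM share, the rate climbs monotonically toward 1.0, which is the collapse scenario in miniature.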
But, hey, at least the line went up for a while and Altman and Huang and their ilk will have made obscene amounts of money they didn’t need, so it’ll have been worth it, I suppose.