I’ve noticed an uptick in the number of pro-AI posts on this platform.
Various posts with titles like “When will people stop being afraid of AI?” or “Can we please acknowledge AI was very needed for X.”
Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.


Humans are social animals, and in the United States especially, where people are severely isolated, they’ll look for and find any kind of easy access to social interaction: including but not limited to chat bots. It’s a sad reality that they’ll dismiss the negative effects it has on our social brains, dismiss the environmental effects it has on our planet, dismiss the social warning signs, because they’re too involved with LLMs, “AI”.
That’s right, it’s not even AI; it’s only large language models or some agentic systems. Way smaller ones existed in the past, think Dr. Sbaitso (1992) or A.L.I.C.E. (1995). It’s actually not hard to make a chat bot, just have it echo what the user says back with some key phrases. That’s the whole existence of chat bots, and today’s current “AI” is the same idea, only with a LOT more parameters, trained off of huge data sets (both free open sources and stolen data), and that’s what causes it to hallucinate: it’s randomness that humans don’t have the ability to inspect or update, simply because it’s such a huge pile of parameters. It’s so massive people think it’s real intelligence! PEOPLE WERE FOOLED BY 1990’s CHAT BOTS TOO! 😭 😂
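For the curious, that keyword-echo trick really is only a few lines. Here’s a toy sketch of the general ELIZA-style technique (my own illustration, not any real program’s code, and the key phrases are made up):

```python
import random

# Toy keyword-matching bot: scan the input for known key phrases
# and reply with a canned response; otherwise just echo the user.
RULES = {
    "i feel": ["Why do you feel that?", "How long have you felt that way?"],
    "because": ["Is that the real reason?"],
    "computer": ["Do computers worry you?"],
}

def reply(text: str) -> str:
    lower = text.lower()
    for key, responses in RULES.items():
        if key in lower:
            return random.choice(responses)
    # Fallback: echo the user back, which still feels "conversational."
    return "You said: " + text

print(reply("My computer crashed again"))  # → Do computers worry you?
print(reply("hello"))                      # → You said: hello
```

That’s the whole trick: pattern match, canned response, echo fallback. It fooled people in the 90s too.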
Anywho, we recommend the movies Desk Set, 2001: A Space Odyssey, Pi, and even Alphaville. They’re related to the subject and they’re pretty good at pointing out the problems.
Sure, sure. So when LLMs find 0days that have been around for a decade, they’re just cold reading and stroking the sloperator’s ego. Got it.
If your point was to say “LLMs are good because they can hack into people’s PCs and make the world worse,” I think you gotta start prioritizing finding some empathy.
Besides, it was not discovered by an LLM or “AI”. It was discovered by Taeyang Lee, a researcher at Theori, and later refined into an exploit chain by the Xint Code Research Team, both of whom used “AI”-assisted analysis. So no, LLMs didn’t magically find a decade-old exploit; an LLM was simply used as a search function over its trained model of past code and the logic bug in the Linux kernel.
So yeah, it’s basically a glorified search function at that point, and if you can find peace fucking a search bar, hey man, that’s your thing 🤷🏻‍♀️
Our sources:
Holy shit, are you a professional strawman builder? Because you’re really good.
An LLM helped fix a bug. That’s all we need to know. It’s useful. Saying so has nothing to do with empathy, lack thereof, or robosexuality or whatever the shit kids are into these days.