I’ve noticed an uptick in the number of pro-AI posts on this platform.
Various posts with titles like “When will people stop being afraid of AI” or “Can we please acknowledge AI was very needed for X”.
Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.


Current AI is unsuitable, but automation of some kind (maybe not AI) will be necessary for a nearly workless future. Life as it stands is kind of dumb; it’d be better if we spent our time in the gym, doing yoga, or learning something, instead of spending our lives in the pesticide factory and then dying of a horrific disease after three years of retirement.
We already had (pre-2020) all the automation we needed to work less than 20 hr/wk and produce all the necessary calories, fresh water, and housing for everyone. Instead, we chose to turn a few people into decabillionaires and keep bickering over the scraps like we weren’t in a post-scarcity society.
LLMs, transformers, convolution layers, characteristic tensors, etc. all have some legitimately novel uses, but all the big “AI” product lines are unethically developed, irresponsibly deployed, and dishonestly marketed.
If you want an ethical chatbot, I recommend https://en.wikipedia.org/wiki/Apertus_(LLM) .
I don’t know of an ethical model that’s good for images or code yet, but I know people are working on them. The IBM Granite models are getting close, but I don’t know if IBM will ever get the training data completely “clean” / open / free.
I’ve been told that StarCoder is an ethically trained free-software model, but some of my research ( https://mot.isitopen.ai/model/StarCoder ) contradicts that assertion, and I haven’t looked into it deeply enough to resolve the conflict myself. (IMO, we don’t actually need automated code generation; we need to write less code in better languages, with better tests and more reuse. But you may not agree.)