I’ve noticed an uptick in the number of pro-AI posts on this platform.

Various posts with titles similar to “When will people stop being afraid of AI” or “Can we please acknowledge AI was very needed for X.”

Can’t tell if it’s the propaganda machine invading, or annoying teenage tech-bros who are detached from reality.

  • bss03@infosec.pub · 4 hours ago

    We already had (pre-2020) all the automation we needed to work less than 20 hr/wk and produce all the necessary calories, fresh water, and housing for everyone. But instead, we chose to turn a few people into decabillionaires and continue to bicker over the scraps like we weren’t in a post-scarcity society.

    LLMs, transformers, convolution layers, characteristic tensors, etc. all have some legitimately novel uses, but all the big “AI” product lines are unethically developed, irresponsibly deployed, and dishonestly marketed.

    If you want an ethical chatbot, I recommend https://en.wikipedia.org/wiki/Apertus_(LLM) .

    I don’t know of an ethical model that’s good for images or code yet, but I know people are working on them. The IBM Granite models are getting close, but I don’t know if IBM will ever get the training data completely “clean” / open / free.

    I’ve been told that StarCoder is an ethically trained free-software model, but some of my research ( https://mot.isitopen.ai/model/StarCoder ) contradicts that assertion, and I haven’t looked into it deeply enough to resolve that conflict myself. (IMO, we don’t actually need automated code generation; we need to write less code in better languages, with better tests and more reuse — but you may not agree.)