I’m sick and tired of the Capitalist religion and all its fanatic believers. The Western right-wing population is the most propagandized and harmful group of people on Earth.

  • 0 Posts
  • 10 Comments
Joined 3 years ago
Cake day: June 8th, 2023

  • Seems to me that peeps outside of the AI development sphere/interest are not aware of how quickly ‘flaws’ get fixed. There are still people who don’t think AI will ever be useful - or intelligent - based on some ‘archaic’ performance from many months ago. Reality will hit hard, I think.

    Personally, I have never seen any development move faster than artificial intelligence, and whatever it can’t do ‘properly’ today, it can do tomorrow or the day after.

    The current AI/agentic state of the art is the clawd family of frameworks + a SOTA model. However, they are really stupid architectures (every 30 minutes, the LLM is yanked back and presented with the original tasks in an md file - that’s it) and are WAY behind what we can do according to papers/newest developments. Papers quickly trickle down into architectures though, and the next family of agentic frameworks will strike as fast as the clawd phenomenon did.

    We are not far from general AI - not primarily from the LLMs/transformers themselves, but from the external cognitive ‘harness’ being built all around them. While the harness adds cognitive states to the architecture, many of the typical agentic features are being built into the model itself, so the cognitive functionality of the harness is injected into the models, and the next harness fixes other ‘flaws’. We will see one clawd moment after another, faster and faster, getting better and better…

    I hope peeps live in a society that takes care of each other, and doesn’t treat people as lazy bums that “just wouldn’t work hard enough”. It’s going to be horrible for peeps in the US and similar Capitalist ‘might is right’ societies. There is NO safety net for ‘failure’ there.

    Back to article: It was bound to happen within a year or so.
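    The “yank back to the task file” loop described above can be sketched in a few lines. Everything here is a hypothetical illustration, not any real framework’s code: `call_llm` is a stub, and the re-injection interval and reminder format are assumptions based only on the description in the comment.

```python
import time

REINJECT_INTERVAL = 30 * 60  # seconds; the "every 30 minutes" yank-back


def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned reply.
    return f"(model reply to {len(prompt)} chars of context)"


def agent_loop(tasks: str, steps: int, now=time.monotonic) -> str:
    """Run `steps` model calls, re-injecting the original task list
    whenever REINJECT_INTERVAL has elapsed - and nothing smarter."""
    context = tasks          # the agent starts from the md task list
    last = now()
    for _ in range(steps):
        if now() - last >= REINJECT_INTERVAL:
            # Yank the model back: append the original tasks verbatim.
            context += "\n# Reminder of the original tasks\n" + tasks
            last = now()
        context += "\n" + call_llm(context)  # naive: context only grows
    return context
```

    Note how little state there is: no memory hierarchy, no planning, just an ever-growing context plus a periodic reminder - which is the “really stupid architecture” point.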

  • How hard can it be to have an AI take PRs from other AIs, filter out the worst, and harden PR protocols? It could even assist/guide AI contributors via a special AI-contributor forum or whatever. AI is currently highlighting a lot of ‘holes’ in systems where we expect a certain behavior. Just complaining and closing things off is a bad decision; we should accept these flaws in our systems and adapt them to a new world. The sooner the better.

    The projects that get it right will end up with an army of managed AI contributors, and a filtered, educational AI PR pipeline where project maintainers cherry-pick the crème de la crème…
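    A minimal sketch of such a triage pipeline, assuming three hypothetical signals per PR (CI status, diff size, and a 0–1 score from a reviewer model - none of these are from any real project’s setup):

```python
from dataclasses import dataclass


@dataclass
class PullRequest:
    author: str
    diff_lines: int
    tests_pass: bool
    review_score: float  # hypothetical 0-1 score from a reviewer model


def triage(prs: list[PullRequest], threshold: float = 0.8) -> dict[str, list[PullRequest]]:
    """Sort incoming (possibly AI-authored) PRs into buckets so human
    maintainers only ever see the filtered top of the pipeline."""
    buckets = {"reject": [], "needs_work": [], "maintainer_review": []}
    for pr in prs:
        if not pr.tests_pass or pr.diff_lines == 0:
            buckets["reject"].append(pr)        # hard gate: CI must pass
        elif pr.review_score < threshold:
            buckets["needs_work"].append(pr)    # bounce back with feedback
        else:
            buckets["maintainer_review"].append(pr)
    return buckets
```

    The “needs_work” bucket is where the educational part would live: feedback goes back to the AI contributor instead of landing on a maintainer’s desk.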