

I ain't just talking about inference. Training costs are insane, and models have to be updated to work well with new languages, libraries, etc.
Energy and water costs for development and usage alone are completely incompatible with that. Come back in 20 years when it’s not batshit insane ecologically.
Not to mention that reducing the power usage of programs isn’t very feasible based simply on an LLM’s output. LLMs are biased towards common coding patterns, and those are demonstrably inefficient (if the scourge of Electron-based web apps is any tell). Thus your code wouldn’t work well with lower-grade hardware. Hard sell.
Theoretically they could be an efficient way to help build software in the future. As it is now, that’s a pipe dream.
More importantly, why is the crux of your focus on not understanding the code you’re making? It’s intrinsically contrived from the perspective of a solarpunk future where applications are designed to help people efficiently, without much power, heat, etc… Weird, man.
True. I wish expectations aligned more with this kind of practice… I see a lot of people thinking in binaries about these kinds of issues: that we should either trust (insert company here) completely or distrust them completely… People should learn to demand accountability and transparency instead of just eating corporate drivel or becoming skeptical of Anything Ever.
Eh. Fediverse is a portmanteau of federation and universe. It’s perfectly fine, and not misleading imho, to suggest other federation-capable services in this context. Especially considering Matrix is definitely part of the wider cultural context underpinning the transition of some groups to these counterculture services (it’s frequently recommended in these contexts and often used by people on Mastodon etc. anyway).
That’s a lot better than it could be. But I’m also talking about training costs. Models have to be updated to work swimmingly with new languages, conventions, libraries, etc. Models are not future-proof.
There are more efficient training methods being employed. See: the techniques R1 used. And existing models can be retooled. But it’s still an intrinsic problem.
Perhaps most importantly, it’s out of the reach of common consumer-grade hardware to train a half-decent LLM from scratch. It’s a tech that exists mostly in the scope of concentrated power, among people who care little for its environmental ramifications. Relying on it in the short term puts influence and power in the hands of people willing to burn our planet. Quite the hard sell, as you might imagine.
Also see: the other points I made.