… and neither does the author (or so I believe - I made them both up).
On the other hand, AI is definitely good at creative writing.
I would argue it’s not the AI but the companies (that make the AI) making unattainable promises and misleading people.
Removed by mod
Guns are literally for killing; it's all they do. Even for hunting, the sole purpose is to kill. That's not the case with LLMs; it's just how these companies are exclusively using them, since they have all the power to dictate terms in the workplace.
Removed by mod
I don’t agree with that. If you use it to destroy human creativity, sure, that will be the outcome. Or you can use it to write the boring-ass work emails you have to write. You could use it to automate boring tasks. Or a company can come along and automate creativity badly.
Capitalism is what’s ruining it. Capitalism is ruining culture, creativity, and the human experience more than LLMs are. LLMs are just a knife: instead of making tasty food, we are going around stabbing people.
And yeah, people made guns just to put holes in pieces of paper, sure, nothing else. If you do not know how LLMs work, just say so. There are plenty that are trained on public data and do not siphon human creativity.
It is doing a lot of harm to human culture, but that’s more about how it’s being used, and it needs real constructive criticism instead of people simply being obtuse.
Removed by mod
Sure, that’s exactly what I believe … Wow, I’m so called out. I use it as a tool to do boring, menial tasks so that I can spend my time on more meaningful things: spending time with my family, making some dinner, and spending time on the parts of my work I enjoy while automating the boring, tedious parts, like writing boilerplate code that’s slightly different based on context.
Can you elaborate on how, and by what mechanisms, you see this happening? Why do you see it that way? Do you not see any circumstances in which it could be useful? Like legitimately useful? Like, have you never had to write a stupid, tedious email to someone you didn’t like, where you’d gladly spend two seconds prompting something else to deal with it for you?
This is true; it’s starting to eat its own tail. That doesn’t mean all new models are trained on new data, though. They could also be using better architectures on the same data. But yes, using AI-generated data to train new AI is bad, and you’ll end up creating a nerfed, less useful model that will probably hallucinate more. That doesn’t mean the tech isn’t useful just because you’ve not seen it used for anything good.
Removed by mod
Sounds like all your problems are with capitalism and not LLMs, but you can’t see that.
And good for you that you’re in a position to not deal with bullshit in your work. Not everyone has that luxury.
Get some empathy for people in different circumstances than you. You sound like a child.
Also, there’s a fuck-ton of useful training data with permissive licenses. And fuck copyright law; it’s been weaponized by capitalists to control our lives, especially since the artists barely get theirs.
We’re never gonna see eye to eye so don’t bother. Peace and love. Have a good day.
Is it the training process that you take issue with or the usage of the resulting model?
Removed by mod
The energy usage is mainly on the training side with LLMs; generating text afterwards is fairly cheap. Maybe what you want is to have fewer companies trying to train their own models from scratch, and to encourage collaboration instead?
Removed by mod
Indeed. Though what we should be thinking about is not just the cost in absolute terms, but in relation to the benefit. GPT-4 is one of the more expensive models to run right now, and you can accomplish very good results with their smaller GPT-4o mini at 0.5% of the energy cost[1]. That’s the cost of running 0.07 LED bulbs for an hour, or running one LED bulb for 0.07 hours (about 4 minutes). If that saves you 5 minutes of writing an email while the room is lit by a single LED bulb and your computer is drawing power, that might just be worth it, right?
[1] Estimated by using https://huggingface.co/spaces/genai-impact/ecologits-calculator and the pricing difference between GPT-4o, 4o mini, and 3.5 (https://openai.com/api/pricing/). The assumption I’m making is that the total hardware and energy cost scales linearly with the API pricing.
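The back-of-envelope estimate above can be sketched in a few lines. The 0.5% ratio follows from the API price difference (GPT-4 input was priced at $30 per million tokens vs. $0.15 for GPT-4o mini) under the comment's assumption that energy scales linearly with price; the per-query energy figure for GPT-4 and the bulb wattage are purely illustrative placeholders, not measured values:

```python
# Back-of-envelope sketch of the estimate in the comment above.
# Assumptions are labeled; none of these are measured figures.

GPT4_PRICE_PER_MTOK = 30.00   # USD per million input tokens (OpenAI pricing at the time)
MINI_PRICE_PER_MTOK = 0.15    # USD per million input tokens (GPT-4o mini)

# Core assumption from the comment: energy cost scales linearly with API price.
scale = MINI_PRICE_PER_MTOK / GPT4_PRICE_PER_MTOK   # ≈ 0.005, i.e. 0.5%

GPT4_WH_PER_QUERY = 140.0     # hypothetical energy per GPT-4 query, in Wh (illustrative)
mini_wh = GPT4_WH_PER_QUERY * scale                 # ≈ 0.7 Wh per GPT-4o mini query

LED_BULB_WATTS = 10.0         # a typical LED bulb (assumed)
bulb_hours = mini_wh / LED_BULB_WATTS               # ≈ 0.07 bulb-hours

print(f"scale ≈ {scale:.3%}, mini query ≈ {mini_wh:.2f} Wh "
      f"≈ {bulb_hours:.2f} bulb-hours (~{bulb_hours * 60:.0f} min of one bulb)")
```

Swapping in a different per-query figure for GPT-4 only rescales the absolute numbers; the 0.5% ratio and the "fraction of a bulb-hour" conclusion come entirely from the price-scaling assumption.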
Are you suggesting the AI would appear spontaneously without those companies existing?
It’s the companies that are the problem.
Would these LLMs exist without the companies?
Is being immoral a prerequisite for producing such tech?
One doesn’t need to be… It can be used for useful things … Unlike what it’s used for now