Not understanding how to use new technologies, even flawed ones, isn’t a flex
I understand LLMs well enough that I really don’t want to use them because they are inherently incapable of judging the validity of information they are passing along.
Sometimes it’s wrong. Sometimes it’s right. But they don’t tell you when they’re wrong, and to find out if they were wrong, you now have to do the research you were trying to avoid in the first place.
I tried programming with it once, because a friend insisted it was good. But it wasn’t, and it was extremely confident while being exceptionally wrong.
Congrats, then don’t use it to validate information.
LLMs are incredible text generators. But if you are going to judge a fish by its ability to climb a tree, then you are never going to find its potential.
Yes, there are tons of bogus AI implementations. But that doesn’t say anything about the validity of the technology. Look at what VLC is doing with it for example.
It is pretty clear from those statements that you understand LLMs less than you claim.
But it wasn’t, and it was extremely confident while being exceptionally wrong.
TIL I’m a LLM
Sounds like you’re the most replaceable. Bye bye, income.
Yeah using it for that reason would be using it wrong. They’re pretty decent at name/description generation. That type of thing. Or it can point you to where to actually look to figure things out - almost like having it design a teaching plan.
Yeah, that’s called ignorance, and we shouldn’t be celebrating it.
Maybe not Chat GPT specifically, but you can hardly use the internet without some AI being pushed on you.
There’s a difference between passively using something and actively using something.
I use electricity every day, but I have no idea how it’s generated. I (assume I) use RSA256, but if you ask me to explain block cypher encryption to you I’d just go “well you take a number and another number and… hope they have sex to produce a bigger number?”
I use a lot of stuff without having to know how it works and without actively choosing to use it.
It’s so strange seeing people being proud that they can’t keep up with the technologies.
Yeah, that’s just judgemental and presumptive.
I have quite a lot of shit in my life, and I have actively decided to pay no attention to AI. Not because “I can’t keep up with it” but because after some research into it I decided “it was bullshit and nonsense and not something I need to know about”
Mate, I don’t know you and I don’t care about you. Stop talking about yourself for a second. You posted a screenshot where the person said “I have never even tried it.” That’s it.
Because you don’t need to try something to have done some research on why you wouldn’t. A basic, cursory search of ‘what is chat gpt’ would get you LLM, and a basic cursory search of LLM gets you ‘machine designed to make shit up whenever it doesn’t know the answer, meaning you can’t trust it’. Not to mention they’ve probably seen at least one screenshot of it failing miserably at counting letters in a word. Or they’ve seen the AI answers Google shoves down their throat and learned from basic word of mouth that GPT is More of That. The only thing they said they don’t know is WHERE GPT is, or how it’s accessed.
Look, this is your life. If you want to be miserable, angry and left behind - fine, I don’t care. You don’t need my permission. So stop wasting your and my time.
All due respect, you started this.
You attacked me for writing a post. Clearly you care enough to do that.
“People being proud that they can’t keep up with the technologies.” is a bit of a slap but it’s hardly an attack.
If you’re going to post on the internet you have to expect people to give you some level of shit.
That wasn’t quite the point I was making :)
I used to know a guy like that. He would say stuff like “I didn’t even know how to use a computer mouse!” It definitely sounded like he was bragging. Such a weird thing to be proud of.
Technically a keyboard is faster, but yeah, weird flex.
I feel like it’s an unpopular take, but people are like “I used chat gpt to write this email!” and I’m like, you should be able to write an email.
I think a lot of people are too excited to neglect core skills and let them atrophy. You should know how to communicate. It’s a skill that needs practice.
This is a reality as most people will abandon those skills, and many more will never learn them to begin with. I’m actually very worried about children who will grow up learning to communicate with AI and being dependent on it to effectively communicate with people and navigate the world, potentially needing AI as a communication assistant/translator.
AI is patient, always available, predicts desires and effectively assumes intent. If I type a sentence with spelling mistakes, ChatGPT knows what I meant 99% of the time. This will mean children don’t need to spell or structure sentences correctly to effectively communicate with AI, which means they don’t need to think in a way other human beings can understand, as long as an AI does. The more time kids spend with AI, the less developed their communication skills will be with people. Gen Z and Gen A already exhibit these issues without AI. Most people experience this when communicating across generations, as language and cultural context changes. This will emphasize those differences to a problematic degree.
Kids will learn to communicate with people and with AI, but those two styles will be radically different. AI communication will be lazy, saying only enough for the AI to understand. With communication history, which is inevitable tbh, and AI improving every day, it can develop a unique communication style for each child, amounting to a personal language only the child and the AI can understand. AI may learn to understand a child better than their parents do and make the child dependent on AI to effectively communicate, creating a corporate filter on communication between human beings. The implications of this kind of dependency are terrifying. Your own kid talks to you through an AI translator; their teachers, friends, all their relationships could be impacted.
I have absolutely zero belief that the private interests of these technology owners will benefit anyone other than themselves, and at the expense of human freedom.
it’s the same with younger ppl today that have no idea how to navigate files and directories because an app with an interface does everything for them.
quite sad actually
I know someone who very likely had ChatGPT write an apology for them once. Blew my mind.
I use it to communicate with my landlord sometimes. I can tell ChatGPT all the explicit shit exactly as I mean it and it’ll shower it and comb it all nice and pretty for me. It’s not an apology, but I guess my point is that some people deserve it.
You don’t think being able to communicate properly and control your language, even/especially for people you don’t like, is a skill you should probably have? It’s not that much more effort.
I can and I do, but I don’t think he’s worth the effort specifically. Lol
Why waste the brain power when the option exists not to?
Because brains literally need exercise, and conversations with other real humans are the best kind they can get, so you’re literally speedrunning an increased risk of dementia and Alzheimer’s with every fake email.
but why learn how to socialize when I can have an AI gf? \s
But you don’t need to do this in every case, no? Some people deserve simply formal answers. Of course one should not do this with bosses.
Because it doesn’t take that much effort, and if it does, then maybe you should practice more.
Why waste time say lot word when few word do trick?
Some people are very proud of not knowing things
There’s a difference between not knowing something because of ignorance and not knowing something because you know you don’t need to know it.
I have no idea how to rebuild a combustion engine.
Is that something of which I should be ashamed? Or something I have actively chosen not to learn because when will I ever need to know it?
Would you be the sort of person to proudly proclaim their lack of knowledge about combustion engines at the time they became a thing?
To be honest? Yeah.
In my last job before this one I learned a lot of stuff about a topic I needed to know for that job.
But now I have a new job I don’t need to know any of that stuff. So I am slowly forgetting it because I don’t use it. And instead I am learning a lot of stuff about things I need for my new job.
And in the midst of all of this, why would I take the time to learn something I am never going to use. At all. Ever. I have far too much stuff to learn and remember, and why would I need to learn how to plug the camshaft into the reverse socket twink-phlange?
I am not afraid of technology. It doesn’t scare me. I am not sitting in a cave railing against these kids with their short skirts and their long hair and their music and “they didn’t do these things in my day”
I just made what I consider to be a fairly educated judgement call that this is something I don’t need to care about.
This isn’t about not needing to learn. If you don’t see any use for this newfangled internal combustion engine, then why learn about whether it is a tiny horse or whatever. But telling people with pride how little you know is almost always eyeroll-worthy. Like, wow, very cool you don’t know something…
Hold on a moment – I don’t go around with a sandwich board on my chest or emailing every person I know.
I brought it up here because I thought it relevant to the topic. But if we were talking about hockey or baseball or what makes different clouds form at different levels then I wouldn’t have mentioned it.
And pride? Again I just mentioned it because it’s something relevant to the topic. I could easily have said I have no clue how nuclear reactors work or how to perform open heart surgery on a human being.
Would that have been boasting about my lack of knowledge? Or does not knowing about how to perform open heart surgery seem relatively normal?
I was just making a point that a lot of people don’t bother with some knowledge because it is shit they don’t need to know. And right now using AI tools is in that category for quite a large percentage of the population.
This is about the person in the original post
The person in the original post didn’t mention combustion engines.
Feels like it is a little about me.
Yes, you should be ashamed. As a tool that you probably depend on every day, you need some sort of basic understanding of it, to do basic troubleshooting, and to have a vague understanding of what your mechanic tells you.
One could say the same about the TV, about the internet, about block cypher encryption, about the economy, about the local sewage system, about the local water and electricity systems, about all sorts of things that we rely on every day.
Oh then there’s the boiler, the cooker, the microwave, the fridge, the telecommunications network…
At what point do I go “huh – maybe I should leave this up to people who went to school to learn about it” rather than trying to learn even the basics about everything that could go wrong in my life when there are CLEARLY people who know more about it than I do and are paid to know more about it than I do?
Every one of those, you should know how to operate, have some basic knowledge of how they work, and be able to troubleshoot basic problems.
And yes, most importantly, you should understand when it’s a user issue and when it’s not.
At what point do I go “huh – maybe I should leave this up to people who went to school to learn about it”
The number of times I’ve seen a question answered with “I asked chatgpt and blah blah blah”, with the answer being complete bullshit, makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea.
This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.
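A toy illustration of my own (a bigram chain, nothing remotely like a real transformer, but it makes the point): the “model” below only knows which word tends to follow which word in its training text, so it emits output that looks like that text, with zero concept of whether any of it is true.

```python
# Toy "associative" text generator: a bigram chain.
# It only records which word tends to follow which word in its
# training text; truth never enters the picture.
from collections import defaultdict
import random

corpus = ("the capital of france is paris . "
          "the capital of spain is madrid .").split()

# For every word, record the words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n=6, seed=0):
    """Emit up to n more words, each sampled from what followed the last one."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Picks what *looks* plausible after the last word, not what's
        # correct: after "of" it may equally well say "france" or "spain".
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("capital"))
```

Real LLMs condition on vastly more context with billions of parameters, but the failure mode has the same shape: the output is sampled from “what usually comes next”, not checked against facts.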
And facts don’t care about your LLM’s feelings
A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.
Why not just read the first part of a Wikipedia article if they want that, though? It’s not the be-all-end-all source, but it’s better than asking the machine known to make things up the same question.
Because the AI propaganda machine is not exactly advertising the limitations, and the general public sees LLMs as a beefed up search engine. You and I know that’s laughable, but they don’t. And OpenAI sure doesn’t want to educate people - that would cost them revenue.
The stupid and the lazy.
Hey, I may be stupid and lazy, but at least I don’t, uh, what were we talking about?
Oh look, it’s the LLMentalist o’clock!
That was a great read! Thanks for padding my RSS feed ever so much more!
Yeah, don’t use a hallucinogenic machine for truth about the universe. That is just asking for trouble.
Use it to give you new ideas. Be creative together. It works exceptionally well for that.
be creative.
nature already has a solution for that and they’re called drugs
I don’t see the point either if you’re just going to copy verbatim. OP could always just ask AI themselves if that’s what they wanted.
We’re in a post-truth world where most web searches about important topics give you bullshit answers. But LLMs have read basically all the articles already and have at least the potential to make deductions and associations about them, like “this belongs to ‘propaganda network 4335’” or “the source of this claim is someone who has engaged in deception before”. Something like a complex fact-check machine.
This is sci-fi currently, because it’s an ocean wide but can’t think deeply or analyze well, but if you press GPT about something it can give you different “perspectives”. The next generations might become more useful at filtering out fake propaganda. So you might get answers that are sourced and referenced, and which can also reference or dispute wrong answers / talking points and their motivation. And possibly what emotional manipulation and logical fallacies they use to deceive you.
Hey MuskAI, is this verifiable fact about Elon’s corruption true?
No, that’s fake news. Here’s a few conspiracy blogs that prove it. Buy more Trump Coin 💰🇺🇸
Respectfully, you have no clue what you’re talking about if you don’t recognize that case as the exception and not the rule.
Many of these early generation LLMs are built from the same model or trained on the same poorly curated datasets. They’re not yet built for pushing tailored propaganda.
It’s trivial to bake bias into a model or put guardrails up. Look at deepseek’s lock down on any sensitive Chinese politics. You don’t even have to be that heavy handed, just poison the training data with a bunch of fascist sources.
You are arguing there is a possibility it will go that way, while I was talking about the possibility of a more advanced AI that is open source and has verifiable arguments with sources. While the negative outcome is very important, you’re practically dog-piling me to suppress a possible positive outcome.
RIGHT NOW even without AI the vast majority of people are simply unable to perceive reality on certain important topics. Because of propaganda, polarization, profit seeking through clickbait, and other effects. You can’t trust, and you can’t verify because you ain’t got the time.
My argument is that a more advanced and open source AI could provide reliable information because it has the capability to filter and analyze a vast ocean of data.
My argument is that this potential capability might be crucial to escape the current (non AI) misinformation epidemic. What you are arguing is not an argument against what I’m arguing.
I apologize if my phrasing is combative; I have experience with this topic and get a knee-jerk reaction to supporting AI as a literacy tool.
Your argument is flawed because it implicitly assumes that critical thinking can be offloaded to a tool. One of my favorite quotes on that:
The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place.
(coincidentally from an article on the topic of LLM use for propaganda)
You can’t “open source” a model in a meaningful and verifiable way. Datasets are massive and, even if you had the compute to audit them, poisoning can be much more subtle than explicitly trashing the dataset.
For example, did you know you can control bias just by changing the ordering of the dataset? There’s an interesting article from the same author that covers well known poisoning vectors, and that’s already a few years old.
These problems are baked in to any AI at this scale, regardless of implementation. The idea that we can invent a way out of a misinformation hell of our own design is a mirage. The solution will always be to limit exposure and make media literacy a priority.
Hmm, very interesting info, thanks. Research about biases and poisoning is very important, but why would you assume this can’t be overcome in the future, by training advanced AI models specifically to understand the reasons behind biases and to filter or mark them?
So my hope is that it IS technically possible to develop an AI model that can both reason better and analyze news sources, journalists, their affiliations, their motivations and historical actions, and can be tested or audited against bias (in the simplest case a kind of litmus test). And to use that instead of something like Google, integrated in the browser (like Firefox), to inform users about the propaganda around topics and in articles. I don’t see anything that precludes this possibility or this goal.
The other thing is that we can’t expect a top-down approach to work; the tools need to be “democratic”. And an advanced, open source AI model, somewhat audited against bias and manipulation, could be run locally on your own solar-powered PC. I don’t know how much it costs to take something like deepseek and train a new model on updated datasets, but it can’t be astronomical. It only takes at least one somewhat trustworthy project to do this. That is much more of a bottom-up approach.
Those who have and seek power have no interest in limiting misinformation. The response to the misinformation by Trump and MAGA seems to have led to more pressure on media conglomerates to be in lockstep and censor anything that is dissent (the propaganda model). So expecting those in power to make that a priority is futile. Those who only seek power are statistically more likely to achieve it, and they will and are using AI against us already.
Of course I don’t have all the answers, and my argument could be put stupidly as “The only thing that can stop a bad AI with a gun is a good AI with a gun”. But I see “democratizing” AI as a crucial step.
Wait, people actually try to use gpt for regular everyday shit?
I do lorebuilding shit (in which gpt’s “hallucinations” are a feature not a bug), or I’ll just ramble at it while drunk off my ass about whatever my autistic brain is hyperfixated on. I’ve given up on trying to do coding projects, because gpt is even worse at it than I am.
I have encountered some people who use it as a substitute for thinking. To the extent that it’s rather unnerving.
They absolutely do. Some people basically use it instead of Google or whatever. Shopping lists, vacation planning, gift lists, cooking recipes, just about everything.
It’s great at it, because it’ll bother trawling webpages for all that stuff that you can’t be bothered to spend hours doing. The internet is really so enshittified that it’s easier to use a computer to do this.
I hate that it is so. It’s a complete waste of resources, but I understand it.
It’s a waste of your resources to close popups, set cookie preferences and read five full screens about grandma’s farm before getting to the point: “Preheat the oven to 200°C and heat the pizza for 15 minutes”, when ChatGPT could’ve presented it right away without any ads.
Brought to you by Chrome being the biggest browser and willfully enshittifying adblockers, which incidentally made searching way more tedious and funneled people to LLMs.
I think the AI hype will die when it gets enshittified enough.
At first they’ll start injecting sponsored results, then it’s going to be pay up or watch 3 ads before access, while still showing the product-placed content regardless.
Companies will be offered a “professional subscription” just to use it without the first two ads.
Right now, it’s free, because they want people to get addicted to the ease of answers that was previously supplied by search engines.
At no point will they ever generate the content that people are searching for. Google’s main mission is to obstruct people from getting that.
Best case, with enough competition, AI is just going to be another layer to see through, because previous versions of the internet have been made unusable by advertising.
Yeah, it’s a useful tool if you are cautious and know what it can and cannot do. The issue is when I have to tell my relatives that using ChatGPT to do your taxes is a poor idea. As long as you know what you’re doing, there’s no reason to demonize its use (outside of environmental factors and infringement of intellectual property. heh…).
It’s perfectly good for things where you don’t need an accurate answer, but just can’t be bothered to do yourself. A good question would be like “I’m going on a weekend trip to Paris with my wife. Give me two choices with differently priced options for schedules Friday to Monday with the main attractions… both options should be less than €500”
It’d take about a weekend to do that research manually with any search engine. AI can deliver it within minutes. So, forget about having it do foolproof explicit programming, at which it would fail. Its force is in doing cumbersome, mundane stuff fast.
I don’t get how so many people carry their computer illiteracy as a badge of honor.
Chatgpt is useful.
Is it as useful as Tech Evangelists praise it to be? No. Not yet - and perhaps never will be.
But I sure do love to let it write my mails to people who I don’t care for, but who I don’t want to anger by sending my default 3 word replies.
It’s a tool to save time. Use it or pay with your time if you willfully ignore it.
Tech illiteracy. Strong words.
I’m a sysadmin at the IT faculty of a university. I have a front row seat to witness the pervasive mental decline that is the result of chatbots. I have remote access to all lab computers. I see students copy-paste the exercise questions into a chatbot and the output back. Some are unwilling to write a single line of code by themselves. One of the network/cybersecurity teachers is a friend, he’s seen attendance drop to half when he revealed he’d block access to chatbots during exams. Even the dean, who was elected because of his progressive views on machine learning, laments new students’ unwillingness to learn. It’s actual tech illiteracy.
I’ve sworn off all things AI because I strongly believe that its current state is a detriment to society at large. If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly. I will learn every skill that I need, without depending on AI. If you think that makes me an old man yelling at clouds, I have no kind words in response.
x 1000. Between the time I started and finished grad school, ChatGPT had just come out. The difference in students I TA’d at the beginning and end of my career is mind-melting. Some of this has to do with COVID losses, though.
But we shouldn’t just call out the students. There are professors who are writing fucking grants and papers with it. Can it be done well? Yes. But the number of papers talking about Vegetative Electron Microscopy, or introductions whose first sentence reads “As a language model, I do not have opinions about the history of particle models,” or completely nonsensical graphics generated by spicy Photoshop, is baffling.
Some days it feels like LLMs are going to burn down the world. I have a hard time being optimistic about them, but even the ancient Greeks complained about writing. It just feels different this time, ya know?
ETA: Just as much of the onus is on grant reviewers and journal editors for uncritically accepting slop into their publications and awarding money to poorly written grants.
Speaking of being old: just like there are noticeable differences between people growing up before or after ready internet access, I think there will be a similar divide between people who did their learning before or after LLMs.
Even if you don’t use them directly, there’s so much more useless slop than there used to be online. I’ll make it five minutes into a how-to article before realizing it doesn’t actually make any sense when you look at the whole thing, let alone have anything interesting or useful to say.
If a person, especially a kid, is not forced to learn and think, and is allowed to defer to the output of a black box of bias and bad data, it will damage them irreversibly.
I grew up, mostly, in the time of digital search, but far enough back that they still resembled the old card-catalog system. Looking for information was a process that you had to follow, and the mere act of doing that process was educational and helped order your thoughts and memory. When it’s physically impossible to look for two keywords at the same time, you need to use your brain or you won’t get an answer.
And while it’s absolutely amazing that I can now just type in a random question and get an answer, or at least a link to some place that might have the answer, this is a real problem in how people learn to mentally process information.
A true expert can explain things in simple terms, not because they learned them in simple terms or think about them in simple terms, but because they have the ability to rephrase and reorder information on the fly to fit into a simplified model of the complex system they have in their mind. That’s an extremely important skill, and it’s getting more and more rare.
If you want to test this, ask people for an analogy. If you can’t make an analogy, you don’t truly understand the subject (or the subject involves subatomic particles, relativity or topology and using words to talk about it is already basically an analogy)
This is about a post where someone proudly proclaims how they know almost nothing about ChatGPT. It’s one thing to decide not to use it because you know it’s this or that, but this is about someone being proud about not knowing anything about it.
Saying you’ve heard of it but haven’t even tried it, and then bragging on social media about it, is different from trying it and then deciding it’s not worth it / more trouble than it’s worth.
Do I see it as detrimental to education? Definitely, especially since teachers are not prepared for it.
I haven’t tried it either. Not even as a joke. I didn’t need to. I’ve seen its effects and came to a conclusion: that I would reject AI and whatever convenience it might bring in order to improve my own organic skills.
It’s not a terrible tool if you already have critical thinking skills and can analyze the output and reject the nonsense. I consider it an ‘idea’ machine as it was sometimes helpful when coding to give me a new idea, but I never used what it spit out because it writes nonsensical code far too frequently to be trusted. The problem is that if you don’t already know what you’re doing, you don’t have the skills to do that critical analysis. So it turns into a self-defeating feedback loop. That’s what we aren’t ready for, because our public education has been so abysmal for the last… forever.
But if you can analyze the content and reject the nonsense, then you didn’t need it in the first place, because you already knew enough about the topic.
And when you’re using it for things you don’t know enough about, that’s where you can’t tell the nonsense! You will say to yourself, because you noticed nonsense before, that “you can tell”, but you won’t actually be able to, because you’re going from known-unknown into unknown-unknown territory. You won’t even notice the nonsense because you don’t know what nonsense could even be there.
Large language models are just that: they generate language without sense behind it. If you use them for anything at all that requires reasoning, then you’re using them wrong.
Literally the only thing LLMs are good for is shit like “please reword this like that”, “please write an ad text praising these and these features of a product”, stuff that is about language and that’s it.
I certainly have bias on their usefulness because all I’ve ever used them for was to get coding ideas when I had a thorny problem. It was good for giving me a direction of thought on a function or process that I hadn’t considered, but there was so much garbage in the actual code I would never use it. It just pointed me in the right direction to go write my own. So it’s not that I ‘needed’ it, but it did on a few occasions save me some time when I was working on a difficult programming issue. Certainly not earth shattering, but it has been useful a few times for me in that regard.
I don’t even like to talk very much about the fact that I found it slightly useful at work once in a while, because I’m an anti-LLM person, at least in the way they are being promoted. I’m very unhappy with the blind trust so many people and companies put in them, and I think it’s causing real harm.
Using ChatGPT doesn’t prove you’re computer literate, it’s pretty much the opposite.
But I sure do love to let it write my mails to people who I don’t care for, but who I don’t want to anger by sending my default 3 word replies.
An adult would find a better way to handle that, but you do you I guess.
As an older techy I’m with you on this, having seen this ridiculous fight so many times.
Whenever a new tech comes out that gets big attention, you have the Tech Companies saying everyone has to have it, in full Overhype mode.
And you have the proud luddites who talk like everyone else is dumb and they’re the only ones capable of seeing the downsides of tech.
“Buy an iPhone, it’ll Change your life!”
“Why do I need to do anything except phone people and the battery only lasts one day! It’ll never catch on”
“Buy a Satnav, it’ll get you anywhere!”
“That satnav drove a woman into a lake!”
“Our AI is smart enough to run the world!”
“This is just a way to steal my words like that guy who invented cameras to steal people’s souls!”
🫤
Tech was never meant to do your thinking for you. It’s a tool. Learn how to use it or don’t, but if you use tools right, 10,000 years of human history says that’s helpful.
“Buy a Satnav, it’ll get you anywhere!”
“That satnav drove a woman into a lake!”
The lesson here is that you should use it as a tool but not trust everything it says blindly.
Guess what people do with ChatGPT? They ask it a question, then trust what it answers blindly.
but if you use tools right, 10,000 years of human history says that’s helpful.
Not every tool is useful. LLMs are the Tobacco Enemas of the modern world. They’re not very useful, but fast-talking salesmen keep conning people into thinking they are.
I totally agree, they’re a tool, use em for what they’re useful for and don’t believe the hype.
As a glorified grammer checker, for example, they’re useful.
If I feed it an essay and ask for examples of where it cites a particular theory, that’s useful.
grammer checker
Nice one.
The thing is, some “tech” is just fucking dumb, and should have never been done. Here are just a few small examples:
“Get connected to the city gas factory, you can have gaslamps indoors and never have to bother with oil again!”
“Lets bulldoze those houses to let people drive through the middle of our city”
“In the future we’ll all have vacuum tubes in our homes to send and deliver mail”
“Airships are the future of transatlantic travel”
“Blockchain will revolutionize everything!”
“People can use our rockets to travel across the ocean”
“Roads are a great place to put solar panels”
“LLMs are a great way of making things”
There are two kinds of scientific progress: the methodical experimentation and categorization which gradually extend the boundaries of knowledge, and the revolutionary leap of genius which redefines and transcends those boundaries.
Acknowledging our debt to the former, we yearn nonetheless for the latter.
-- Academician Prokhor Zakharov,
Always upvote Alpha Centauri!
EDIT: and in a slightly more content-related answer: I picked those examples because there’s a range of reasons why these things were stupid. Some turned out to be stupid afterwards, like building highly polluting gasworks in the middle of cities, or airships. Some were always stupid, even in their very principles, like using rockets for air travel, solar-panel roads or blockchain.
LLMs are definitely in the latter category. Like solar roadways, blockchains or commute-by-rocket, the “solution” just doesn’t have a problem to solve or a market.
I agree. People are often dumb, especially the smart ones.
When you go through life seeing the world differently it’s easy to assume that other people just don’t get it, that they’re the problem as always, when they say your invention is useless, misguided, inappropriate or harmful.
No matter how smart these people are, reality always catches up in the end, hopefully with as few casualties as possible.
Not all tools are worthy of the way they are being used. Would you use a hammer that had a 15% chance of smashing you in the face when you swung it at a nail? That’s the problem a lot of us see with LLMs.
No, but I do use hammers despite the risks.
Because I’m aware of the risks and so I use hammers safely, despite the occasional bruised thumb.
You missed my point. The hammers you’re using aren’t ‘wrong’, i.e. smacking you in the face 15% of the time.
Said another way, if other tools were as unreliable as ChatGPT, nobody would use them.
You’ve missed my point.
ChatGPT can be wrong but it can’t hurt you unless you assume it’s always right
And assuming it’s always right is what the general public is doing.
Like the lady who drove into the lake because her satnav told her to.
deleted by creator
Hammers are unreliable.
You can hit your thumb if you use the tool wrong, and it can break, doing damage, if e.g. it is not stored properly. When you use a hammer, you accept these risks, and can choose to take steps to mitigate them by storing it properly, taking care when using it and checking it’s not loose before using it.
In the same regard, if you use LLMs for what they’re good at, and verify their outputs, they can be useful tools.
“LLMs pointless because I can write a shopping list myself” is like saying “hammers are pointless because I can just use this plank instead”. Sure, you can do that, but there’s other scenarios where a hammer would be kinda handy.
if you use LLMs for what they’re good at, and verify their outputs
This is the part the general public is not prepared for, and why the whole house of cards falls apart.
I agree - but that’s user error, not a bad tool
Yeah, it’s a bullshit generator. You just gave a great example of how it’s good at generating bullshit.
When there’s some stupid task like having to write emails that nobody will read, ChatGPT is a good tool for the job. Of course, you shouldn’t have to write those emails in the first place, but as long as you’re stuck in that situation you might as well offload it to an LLM.
Sounds like it’s a tool for wasting time.
I used the image generation of a jailbroken model locally to drum up an AI mock-up of work I then paid a professional to do.
This was 10000x smoother than the last time I tried this, where I irritated the artist with how much they failed to understand what I meant. The AI didn’t care, I was able to get something decently close to what I had in my head, and a professional took that and made something great with it
Is that a better example?
Yes. AI is great at creating mediocre slop to pour onto a giant mountain of mediocre slop that already exists online. In fact, that’s an LLM’s greatest power: Producing stuff that looks like other stuff.
This is the perfect usecase for it. Mockups, sketches, filler. Low-quality, low-effort stuff used only as an input for more work.
Yeah, actually, that’s a productive use of a bullshit generator.
Meaning, it didn’t generate bullshit that time.
Almost like technology in the hands of someone skilled works wonders, right?
It did create some bullshit. A skilled professional had to actually compose the final product.
That’s the thing. It’s a tool like any other. People who just give it a 5 word prompt and then use the raw output are doing it wrong.
It takes a lot of skill and knowledge to recognise a wrong answer that is phrased like a correct answer. Humans are absolutely terrible at this skill; it’s why con artists are so successful.
And that skill and knowledge is not formed by using LLMs
Absolutely.
And you can’t learn to build a fence by looking at a hammer.
My point all over really. Tools and skills develop together and need to be seen in context.
People, whether for or against, who describe AI or any other tool in isolation, who ignore detail and nuance, are not helpful or informative.
But you have the tech literacy to know that. Most non-tech people that use it do not, and just blindly trust it, because the world is not used to the concept that the computer is deceiving them.
You mean like that woman who drove into a lake because her satnav told her to?
Maybe we should ban satnavs then! Too dangerous
I like to take photos of plants and get it to tell me what the plant is, whether it’s a good houseplant, whether I can propagate it in water, and what the symptoms on its leaves mean, and it’s really good at it.
I had to tell a potential employer recently that I won’t ever use generative AI.
apparently gugle already uses it for 25% of their coding
Lol, if you’re talking about Alphabet it will probably cost them more from paying out bounties to people who discover bugs and insecurities.
but at least one middle manager got promoted, so it’s all good
Spent this morning reading a thread where someone was following chatGPT instructions to install “Linux” and couldn’t understand why it was failing.
Hmm, I find ChatGPT is pretty decent at very basic tech support asked with the correct jargon. Like “How do I add a custom string to cell formatting in Excel”.
It absolutely sucks for anything specific, or asked with the wrong jargon.
Good for you buddy.
Edit: sorry that was harsh. I’m just dealing with “every comment is a contrarian comment” day.
Sure, GPT is good at basic search functionality for obvious things, but why choose that when there are infinitely better and more reliable sources of information?
There’s a false sense of security coupled with the notion of “asking” an entity.
Why not engage in a community that can support answers? I’ve found the Linux community (in general) to be really supportive and asking questions is one way of becoming part of that community.
The forums of the older internet were great at this… creating community out of commonality. Plus, they were largely self-correcting in a way in which LLMs are not.
So not only are folk being fed gibberish, it is robbing them of the potential to connect with similar humans.
And sure, it works for some cases, but they seem to be suboptimal, infrequent or very basic.
Oh, I fully agree with you. One of the main things about asking super basic things is that when it inevitably gets them wrong, at least you won’t waste that much time. And it’s inherently parasitical: basic questions are mostly right with LLMs because thousands of people have answered the basic questions thousands of times.
Like, which distro and version?
Oh hey it’s me! I like using my brain, I like using my own words, I can’t imagine wanting to outsource that stuff to a machine.
Meanwhile, I have a friend who’s skeptical about the practical uses of LLMs, but who insists that they’re “good for porn.” I can’t help but see modern AI as a massive waste of electricity and water, furthering the destruction of the climate with every use. I don’t even like it being a default on search engines, so the idea of using it just to regularly masturbate feels … extremely selfish. I can see trying it as a novelty, but for a regular occurrence? It’s an incredibly wasteful use of resources just so your dick can feel nice for a few minutes.
Using it for porn sounds funny to me given the whole concept of “rule 34” being pretty ubiquitous. If it exists, there’s porn of it! Even from a completely pragmatic perspective, it sounds like generating pictures of cats. Surely there is a never-ending ocean of cat pictures which you can search and refine; do you really need to bring a hallucination machine into the mix? Maybe your friend has an extremely specific fetish list that nothing else will scratch? That’s all I can think of.
He says he uses it to do sexual roleplay chats, treats it kinda like a make-your-own-adventure porn story. I don’t know if he’s used it for images.
If he’s using an online model, I hope he used a privacy-respecting VPN, a hardened browser, and didn’t sign up using his email, or else his IP address and identity are now linked to all those chats, and that info could be exposed, traded, or sold to anyone.
Now imagine growing up where using your own words is less effective than having AI speak for you. Would you have not used AI as a kid when it worked better than your own words?
Wdym “using your own words is less effective than having AI speak for you”? Learning how to express yourself and communicate with others is a crucial life skill, and if a kid struggles with that then they should receive the proper education and support to learn, not be given an AI and told to just use that instead.
It is, and they should, but that doesn’t mean they will. Gen Z and Gen Alpha have notable communication and social issues rooted in the technologies of today. Those issues aren’t stopping our use of social media, smartphones or tablets, or stopping tech companies from doubling down on the technologies that cause the issues. I have no faith they will protect future children when they have refused to protect present children.
What I mean is that much like parents who already put a tablet or TV in front of their kid to keep them occupied, parents will do the same with AI. When a kid is talking to an AI every day, they will learn to communicate their wants and needs to the AI. But AI has infinite patience, is always available, never makes the kid feel bad, and can effectively infer and accurately assume the intent of a child from pattern-recognizing communication that parents may struggle to understand. Every child would effectively develop a unique language for use with their AI co-parent that really only the AI understands.
This will happen naturally simply by exposure to AI, which parents seem more than willing to allow as easily as tablets, smartphones and TV. Like siblings where one kid understands the other better than the parent and translates those needs to the parent, children raised on AI may end up communicating with their caretakers better through the AI, just like the sibling, but worse. Their communication skills with people will suffer because more of their needs are getting met by communicating with AI. They practice communication with AI at the expense of communicating with people.
I don’t know how to feel about this. I need to ask ChatGPT.
AI is here to stay. Anyone who refuses to learn how to use it to benefit their lives will be hurting their future. I’ve used a dozen or so AI tools and use a couple regularly and the efficacy of just chatGPT is clear. There is no going back, AI is your future whether you want it or not. AI will become your user interface for consumer electronics similarly to how consumer electronics seem to all require smart phone apps these days. Your smart phone is now the intermediary, using whatever AI the hardware manufacturers allow, such as Apple and Google using their own LLM AIs.
This entire argument is predicated on the assumption that it is a benefit to my life.
What if I believe that it’s not? That it is an active detriment? That I can live my life better without it?
And this is not contempt prior to investigation. I’ve tried it, and I honestly believe that I can do things better without it.
You know people who connect their fridge to the internet, and their front door locks to the internet, and their central heating system to the internet?
What benefit does that give me? All it does is allow – or potentially allow – someone to hack into my fridge, my central heating and my front door.
Why would I do that? I mean – that would be ridiculous. I have a front door lock that’s an actual lock because it is almost certainly going to be more secure.
I can write my answers, my emails, my letters better than AI can. I can write proposals at work better than AI can.
I can manage my life better than AI can because based on everything I have seen there is nothing it can do that is anywhere near as competent as I am.
I’m 100% with you on this. There isn’t a single thing that generative AI can do better than a traditional method or by myself.
AI code is pretty much useless, as you spend 2-4x the time debugging and fixing the code as you would have writing it from the ground up.
AI Search is useless because it regularly and predictably gives bad and/or incorrect results. A well built traditional search engine is so much better, but have disappeared with Microsoft and Google going all in on AI search.
AI art isn’t art, and I would never support anyone who uses it, let alone makes money from it. It fundamentally is missing three of the core pillars of art, which are creativity, uniqueness and the human experience.
AI chat bots are ruining human connection and consistently perform worse than human support reps.
A good horse rider was once better than an automobile for traveling on the dirt roads that existed. I have avoided just about every novel and ridiculously useless tech trend for 20 years, but I do not believe this is the same. This is a foundational change on par with the internet or the smartphone. If you can’t find a single use for AI in your life, then you will be left behind while others make significant improvements to theirs. More likely, however, it will be unavoidable in the next decade as AI slowly becomes the user interface preferred by companies, which is already happening in customer service. Having used AI and LLMs regularly for the last 3-4 months, there is no going back. You can choose to live in the past for as long as you are able, but your dependency on how you do things today will impede your ability to function in a future that makes those processes obsolete, especially as future generations grow up with AI from birth.
AI can be useful for certain things, I just think the majority of people are using it for shit that’s not actually making their life better. For example, students using it to write essays, summarize paragraphs for notes, etc. It makes their short-term work easier, but it doesn’t actually help them learn. Yeah, taking notes is annoying, but being able to read/hear something and then put it in your own words helps develop critical thinking and teaches you to synthesize information. I get companies having AI chatbots for answering simple questions that direct you to a real person if your question is too complicated or specific. But LLMs aren’t search engines, even though a lot of people use them as if they were.
If a process that gets actionable results doesn’t require those skills, we will no longer develop them. As bad as it is for us, most of the reason we have education at all is because the business class needed educated workers. As soon as they don’t, support for education will collapse from the business side and with it, we all become American red states. If a student can get through their education, producing good enough answers with AI, why do they need to ever not use AI? If I can get an answer with a calculator I’ll always have access to, I simply exchange a mental math process with a calculator use process. If using AI is faster, with lower error rate, and can do more complex maths, we won’t need those mental math skills anymore. It would be a waste of time to learn them rather than learning AI related skills.
AI is going to upend things across society, and we won’t be the ones deciding if it happens or what sacrifices we’re forced to make.
Critical thinking and communication are crucial life skills that people need to develop regardless of what job they do. Even if technology becomes so advanced that we have computer chips in our brains so that we can constantly search the web, we’ll still need critical thinking and communication skills. One major way we develop those skills is by going to school and doing things like math homework, taking notes, and putting the things we learned into coherent essays. We might learn more effective ways to learn these skills in the future, but that will still require the student to do the work themselves. Just bc they can pass their classes using AI, doesn’t mean they’re actually achieving the purpose of the class. Literally part of the reason why there are kids thinking that LLMs are search engines that will answer their questions with facts 100% of the time is bc those kids lack the critical thinking and reading comprehension skills to properly understand what LLMs are and what they should actually be used for. If we were to say, “actually it’s fine for kids to just use AI to spit out the right answers on their homework bc they’ll always have an AI on their phone in their pocket. Plus who needs critical thinking” then we’re leaving those kids vulnerable to manipulation by those who control what the AI’s say, just like how the lack of proper education already present in the USA leaves kids (who then grow into adults) vulnerable to propaganda
I don’t disagree about the benefits of those skills, I just question whether we’ll still effectively produce adults who have them. People are lazy and they’ll take a good-enough solution through AI over a better solution through their own effort; children are particularly prone to this. On the other side we have billion-dollar companies that would love nothing more than a population completely dependent on their devices to survive, whose AI divisions are mostly unregulated and who are currently colluding in a dictatorial overthrow of America’s democracy, so I don’t think they give a shit if our kids’ lives end up fucked up from a lack of critical thinking. They aren’t held accountable for anything their technologies do to us.
you’re not going to get anywhere with these people.
i’m fairly certain most people are much too threatened on a fundamental level by these technologies to be rational about it. we can sit here throwing data and studies at them if we want, showing they are objectively wrong but it won’t do anything effectual.
the way i see people like this discussing the technology reminds me a lot of schoolyard behavior. the feelings it inspires in them are too much to discretely express so we get obviously incorrect quips and jabs instead of thoughtful discussion, to the roar of the crowd
I hear ya, but I can’t stop. I believe this change is significant and I don’t want to see them blindsided by their inability to see it today. One day in the not-so-distant future, they won’t be able to avoid it, and better that they are armed with some information for the day they can no longer avoid it.
If you think I am threatened by technology, you are barking up the wrong tree.
And if you think I don’t understand what AI is and what the flaws in it are, you are also barking up the wrong tree.
If you think I am some 60 year old man shouting at clouds who just wants to live in a cave with a firepit then you are barking at an entire forest.
so, i don’t necessarily disagree that a lot of AI shit on the market rn is useless, trite bullshit but then again so was almost every tech product between 2000 and now. some people preferred to live their lives like they did before the digital revolution. you don’t really see people claiming the internet is useless anymore, tho, do you?
sure, you believe you can do things better without it. and that might be true. unfortunately, some others believe (correctly) that they can handle a larger cognitive workload using these tools, which is their purpose. regardless of your opinion on AI, anyone well educated enough in the actual industry knows that there is an additive, non-zero nootropic benefit that can be achieved. we would say the same thing about giving someone access to Google on a school test, of course they perform better! except with AI i think there is a lot of emotionally driven thinking causing people to not come to the obvious conclusions here. just because some people can figure out how to make use of these tools in a beneficial manner and you cannot doesn’t mean the tools themselves are bad.
the anti-AI horde always likes to harp about “b-b-b but my 6 fingers” and “it only can write in corpo-speak,” amongst other things. truthfully speaking, the sheer volume of work an AI is capable of doing vastly outweighs the fact that it makes mistakes in negligible proportions. i see these techs derided as “averaging-machines,” people with a straight face seriously saying this as if something that does average on virtually every cognitive task at all times isn’t already handily outcompeting its human counterparts. sitting here performatively acting does nothing to counter the fact that the most significant minds in this field of research can all at least agree that this won’t remain the status quo for long. these technologies are in a position to vastly outpace any human being’s individual economic output, like it or not.
you are in direct competition with these individuals and technology. i, honestly, hope you understand the “pro-AI” sentiment being directed at you is less a commentary on your choice surrounding the matter and more a warning that in the future you are going to be handily outcompeted by those who do choose to use these tools and exploit them to their full benefit. it’s easy to toss stones from the comfort of the present, but, when you’ve been jobless for 5 years because no one hires the “old” kind of worker maybe you will reconsider at least keeping up with the times. i don’t mean that as scorn, truthfully. it’s a fair warning.
100% agree. I wish people weren’t so dismissive because I don’t want to see them hurt because of their failure to see the future in the present.
show me a so-called ai that doesn’t fuck up all the goddamn time and maybe I’ll use it for something simple. except it fails the simplest things all the time. does it so much that they have cutesy names for it. it’s not libel, it’s hallucination. it’s not murder, it’s mortality manifestation. fuck ai. get back to making tools that actually work.
why do i have a feeling if i asked you to tell me what hallucinations are in a technical sense i would get a regurgitated answer from google?
being blind to the obvious doesn’t help anyone, man. anyone who has genuinely worked on or even just with these tools knows that they are capable of producing quality outputs. sometimes they mess up, sure, but it also can work 1000000x faster than you can. the energy problem in turn is a valid discussion but this is just being oblivious to the obvious.
why do you guys all mistake the climate of early tech adoption as an indicator of the technology itself being bad? were you not alive for the rise of the internet or something? i think you guys all just hate corporatism, not AI, but for some reason can’t take the logical step to that conclusion.
I don’t know why you have that feeling because you definitely wouldn’t get a regurgitated answer from google, since I don’t give a shit what it is in a technical sense. guess what, if I buy a phone that might catch fire every once in a while, I don’t need to know how or why it does that in a technical sense to confidently say that it is shit and not worth my money or time.
“sometimes they mess up” is not good enough, and no, the output is not “quality”.
i was alive for the rise of the internet and the analogy doesn’t work. llms are fundamentally useless for 90% of what they’re currently being used for, which is mostly generic assistance. assistance needs knowledge and actual skills not a glorified autocomplete for everything.
That’s entirely on you for using it for what it’s bad at and then claiming it’s bad at everything. I use an LLM literally every day for work and it’s a time saver. I had to learn what it’s good for and what it’s not, though. I also use the better available versions, not the publicly available ones. Asking it questions about vague and subjective things isn’t where it’s best. Asking it to make an Excel formula that does a thing without needing to even know a function exists to do that? Priceless.
using things for what they’re advertised for is not “entirely on me”. you can’t sell me a phone and blame me when I say it’s shit because it makes a great doorstop.
Yes it is. The creators don’t fully know what their own products are capable of or how best to use them. If you are dependent on others to tell you how to use this new tool, you will be behind the curve.
lol I don’t consider being able to construct my own sentences as behind the curve but hey.
But you can’t construct sentences that produce value from AI while others can.
Fuck that, anything “AI” worth anything is just algorithms we already had that were rebranded to take advantage of stupid people. My life is going just fine without its nonsense, thanks.
Fuck that, anything “AI” worth anything is just algorithms we already had that were rebranded to take advantage of stupid people.
While what you describe does happen (and are the worst of the worst examples of shitty unnecessary bullshit) LLMs are not algorithms we already had.
Things like ChatGPT/Copilot are novel tech. You might not like them, and they can hallucinate answers, but it is new.
My life is going just fine without its nonsense, thanks.
The theory is that you will be left behind, not that your life is missing anything.
Picture the native Americans before colonialism. Their lives were going just fine, but then a money addicted hyper “efficient” type of culture appeared and they weren’t able to raise armies and build weapons at the rate necessary to keep their way of life.
If you + LLM can do your job more efficiently than you alone then by supply/demand your value as an employee is going down by refusing to adapt, and your salary will reflect your comparatively lower output than your peers.
“Worth anything” were the words I used. I didn’t say LLMs weren’t new, but right now they’re just untrustworthy, and people keep using them as huge crutches so they don’t need to actually learn how to do the most basic elements of their jobs. As far as doing my job goes, I produce better-quality work at a good pace, and I know it’s better because I actually did the work, because I bother to learn new skills and really understand how it all works.
Also what a wild example to use colonized native Americans with the US and all its failures in quality of life(the pretty propaganda does not make the senseless poverty go away), education, and human rights. “Look, AI is like a giant shit hole and you’re just not keeping up!”
Buddy, if I die in however many decades being able to say I was an active, if sometimes less efficient, participant in my own life and projects then I’ll be dying happy.
“worth anything” were the words I used. I didn’t say LLMs weren’t new, but right now they’re just untrustworthy and people keep using them as huge crutches so they don’t need to actually learn how to do the most basic elements of their jobs
It’s a tool. I can respect that to you the tool doesn’t seem helpful, but there are many people who are skilled at their jobs and also have to write a lot of boilerplate, maybe for unit testing, maybe for writing REST endpoints. There will be a task where the LLM outpaces you, and you just refuse to use it to find out. There’s a “for what” and a “when” to use it, and in those situations you unfortunately are already outpaced.
You’re certainly right it shouldn’t be used as a crutch for every type of work, but you’re wrong that not ever using it is more efficient than using it contextually.
You will be left behind. Laughing at juniors who over-rely on it is putting your guard down. Juniors become seniors with time and experience.
Also what a wild example to use colonized native Americans with the US and all its failures in quality of life(the pretty propaganda does not make the senseless poverty go away), education, and human rights. “Look, AI is like a giant shit hole and you’re just not keeping up!”
Why’s that wild? I chose it for that exact reason.
AI means that you and I have to be more efficient or we will be left behind.
Being more productive doesn’t benefit you or me in any way, except not losing our jobs. Our bosses are just sucking more money out of us.
But AI has landed and is colonizing us. Plugging your ears and refusing to engage with it isn’t a historically successful response.
If you don’t want to use AI going forward, then we need to organize to ban it. We can’t individually just insist “I’m more productive without it!” because expertise is difficult for non-experts doing the hiring to suss out, but productivity is easy to track via metrics.
It’s a shit world out here, why do you think I’m disagreeing?
No, it very much isn’t. AI is a mass data analysis tool with vastly greater capabilities than any human counterpart.
And then it hallucinates. What part of that reads as “AI” to you anyway? It’s just searching through text and photos to pull stuff that seems to be related based on search parameters which is something search engines have been doing for years without being called AI.
Look, I’m not at all saying we’ve reached a ceiling yet but calling any of this “AI” right now is just incorrect. Even the LLMs are spending too much time telling you what you want to hear because they don’t actually know what’s happening.
I use ChatGPT mainly for recipes, because I’m bad at that. And it works great, I can tell it “I have this and this and this in my fridge and that and that in my pantry, what can I make?” and it will give me a recipe that I never would have come up with. And it’s always been good stuff.
And I do learn from it. People say you can’t learn from using AI, but I’ve gotten better at cooking thanks to ChatGPT. Just a while ago I learned about deglazing.
You should try this thing, it’s pretty neat, just press Maya or Miles. Though it requires a microphone, so you may have to open it on your phone.
https://www.sesame.com/research/crossing_the_uncanny_valley_of_voice#demo
I’ve tried a few GenAI things, and didn’t find them to be any different than CleverBot back in the day. A bit better at generating a response that seems normal, but asking it serious questions always generated questionably accurate responses.
If you just had a discussion with it about what your favorite super hero is, it might sound like an actual average person (including any and all errors about the subject it might spew), but if you try to use it as a knowledge base, it’s going to be bad because it is not intelligent. It does not think. And it’s not trained well enough to only give 100% factual answers, even if it only had 100% factual data entered into it to train on. It can mix two different subjects together and create an entirely new, bogus response.
It’s incredibly effective for task assistance, especially with information that is logical and consistent, like maths, programming languages and hard science. What this means is that you no longer need to learn Excel formulas or programming. You tell it what you want it to do and it spits out the answer 90% of the time. If you don’t see the efficacy of AI, then you’re likely not using it for what it’s currently good at.
Developer here
Had to spend 3 weeks fixing a tiny app that a vibe coder built with AI. It required rewriting significant portions of the app from the ground up because AI code is nearly unusable at scale. Debugging is 10x harder, code is undocumented and there is no institutional knowledge of how an internal system works.
AI code can maybe be ok for a bootstrap single programmer project, but is pretty much useless for real enterprise level development
It’s definitely not good for whole programs in one go or complex programming. Businesses hoping to replace coders isn’t really happening. But for bite sized code sections like a simple function or non-coders who need something that does a bespoke task in their life? It seems pretty effective. I don’t know a programming language but decided to try and automate my trading strategies and in a month I’d written a program in Python that automatically trades my opening strategy. I would never have been able to do that without chatGPT. It has effectively reduced the time it takes to have functional code significantly, especially as I need to use APIs which AI has been phenomenal at providing without needing to dig through the documentation.
It isn’t replacing engineers but it definitely helps save time and can empower non engineers to make useful programs without needing years of schooling.
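The “bite-sized code sections” end of that spectrum might look something like this. Everything here is a hypothetical illustration, not the commenter’s actual program: a minimal moving-average crossover signal of the kind a non-coder could plausibly get out of ChatGPT, with the broker API calls deliberately left out.

```python
# Minimal sketch of a trading-signal helper (illustrative only; the
# commenter's real strategy and broker API are not shown here).

def moving_average(prices, window):
    """Simple moving average of the last `window` prices, or None if
    there isn't enough data yet."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=3, slow=5):
    """Return 'buy' when the fast MA is above the slow MA, 'sell' when
    it is below, 'hold' when they are equal, None with too little data."""
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    if fast_ma is None or slow_ma is None:
        return None
    if fast_ma > slow_ma:
        return "buy"
    if fast_ma < slow_ma:
        return "sell"
    return "hold"
```

In practice the signal would feed a broker’s order endpoint, which is exactly the part where API documentation (or an LLM that has read it) does the heavy lifting.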