I’ve actually started to recognize the pattern of when something is written by AI
It’s hard to describe, but it’s like an uncanny valley of quality, like if someone uses flowery SAT words to juice up their paper’s word count, but somehow even more
It’s like the writing will occasionally pause to comment on itself and the dramatic effect it’s trying to achieve
Yeah, this is true! It likes to summarize things at the end in a stereotypical format
It’s not a bad format either; AI seem to enjoy the five paragraph assay format above all others, even for casual conversations.
AI seem to enjoy the five paragraph assay format above all others, even for casual conversations.
Yes, it could be worse, but I’m stealing this and from now on calling the crappy AI essay format an “assay.”
Yeah it’s called bullshitting. It’s the way lots of people are encouraged to write in high school when the goal is to see if the student can write a large amount of prose with minimal grammatical errors.
But once you get to post-secondary, your writing is expected to actually have content and to express that content fairly concisely. And AI falls on its face trying to do that.
The LLM isn’t really thinking; it’s autocomplete trained so the average person would be fooled into thinking the text was produced by another human.
I’m not surprised it has flaws like that.
BTW, here on Lemmy there are communities with AI pictures. Someone created a similar community but with art created by humans.
While the AI results are very good, when you start looking and comparing them with non-AI art, you start seeing that even though each AI piece is unique, it still produces cookie-cutter results.
Yep, AI art is just getting through its irrational exuberance phase. It was (and sometimes is) impressive to create art in a style most of us can’t draw or paint in. But AI models tend to produce very similar results unless very specifically prompted. AI art creators are also using a lot of other tools (like ControlNet, which allows you to replicate composition elements from another work) to break out of the “default AI model” look.
All of that points to an immediate future where AI art is seen as low-quality and instantly identifiable, except where AI art creators have spent a fair amount of time customizing and tailoring their image. Kind of like…real artists using pre-AI modern tools like Photoshop, filters, etc.
I have an issue with using AI to write my resume. I just want it to clean up my grammar and maybe rephrase a few things in a different way than I would, because I don’t do the words real good. But I always end up with something that reads like I paid some influencer manager to write it. I write 90% of it myself so it’s all accurate and doesn’t have AI errors. But it’s just so obviously too good.
You are putting yourself down unnecessarily. You want your resume to talk you up. Whoever reads it is going to imagine that you embellished anyway. So if you just write it basically, they’ll think you’re unqualified or just don’t understand how to write a resume.
Writing papers is archaic and needs to go. College education needs to move with the times. Useful in doctorate work but everything below it can be skipped.
Learning to write is how a person begins to organize their thoughts, be persuasive, and evaluate conflicting sources.
It’s maybe the most important thing someone can learn.
The trouble is that if it’s skipped at lower levels doctorate students won’t know how to do it anymore.
Are they going to know how to do it now if they’re all just ChatGPT-ing it?
Clearly we need some alternative mode to demonstrate mastery of subject matter. I’ve seen some folks suggesting we go back to pen and paper writing, but part of me wonders if the right approach is to lean in and start teaching what students should be querying and how to check the output for correctness. Honestly, though, that still necessitates being able to check whether someone is handing in something they worked on themselves at all, or whether they just had something spit out their work for them.
My mind goes to the oral defense: have students answer questions about what they’ve submitted to see if they familiarized themselves with the subject matter prior to cooking up what they submitted. But that feels too unfair to students with stage anxiety, even if you limit these kinds of papers to only once a year per class or something. Maybe something more like an interview, with accommodation for socially panickable students?
I’m in software engineering. One would think that English would be a useless class for my major, yet at work I still have to write a lot of documents: preparing new features, explaining existing ones, writing instructions for others, etc.
BTW: when using AI to write essays, you generally have a subject that is well known and that many people have written about similarly, and all of that was used to train it.
With technical writing you are generally describing something brand new and very unique, so you won’t be able to make AI write it for you.
When I come across a solid dev who is also a solid writer, it’s like they have superpowers. Being able to write effectively is so important.
You can’t have kids go through school never writing papers and then get to graduate school and be expected to churn out long, well-written papers.
“While the thing you entered in the prompt, it’s important to consult this other source on your prompt. In summary, your prompt.”
I’ve started getting AI-written emails at my job. I can spot them within the first sentence, and they don’t move the discussion forward at all, so I just have to write another email, giving them the courtesy they didn’t give me, explaining why what they “wrote” doesn’t help.
Can someone tell me, am I a boomer for being offended any time someone sends me AI-written garbage? Is this how the generations will split?
Lesson I’ve learned: email is for tracking/confirmation/updates/distributing info, not for decision-making/discussions. Do that on the phone, in meetings, etc., then follow up with confirmation emails.
So when someone sends a nonsense email, call them to clarify. They’ll eventually get tired of you calling every time they send their crappy emails.
I disagree about the purpose of email. I end most meetings thinking to myself, “That last hour could have been accomplished in a brief email.”
I think you’re both right. A lot of meetings are one person talking and the others listening, that could have been an email. Actual back-and-forth discussion needs to be verbal though, otherwise what could be resolved in 10 minutes takes a week.
Exactly.
Email doesn’t get buy-in from stakeholders as well, either. It’s also a lot harder to flesh out subtleties and nuance in whatever problem you’re addressing.
Meetings are a different problem.
If meetings are used merely to disseminate info from above, then it should be an email.
Email shouldn’t be used for decision-making conversations. It doesn’t work well.
(I didn’t come up with this, it was taught to me by senior management at one company that had the most impressive communications I’ve ever seen).
am I a boomer for being offended any time someone sends me AI-written garbage?
Yes.
But also — why are you doing them any courtesies? Clearly the other person hasn’t spent any time on the email they sent you. Don’t waste time with a response - just archive the email and move on with your life.
Large Language Models are extremely powerful tools that can be used to enhance almost anything, including garbage, but they can also enhance quality work. My advice is don’t waste your time with people producing garbage, but be open and willing to work with anyone who uses AI to help them write quality content.
For example, if someone doesn’t speak English as a first language, an LLM can really help them out by highlighting grammatical errors or unclear sentences. You should encourage people to use AI for things like that.
But also — why are you doing them any courtesies? Clearly the other person hasn’t spent any time on the email they sent you. Don’t waste time with a response - just archive the email and move on with your life.
That’d be nice! But that’s not how it works. I can’t just ignore a response. The project still needs to move forward, but if they’ve successfully mimicked a “response” - even an unhelpful one - it’s now my duty to respond or I’m the one holding things up.
I’m sure someone out there is using them in a way that helps, but I haven’t seen it yet in the wild.
I’m sure someone out there is using them in a way that helps, but I haven’t seen it yet in the wild.
That’s because those responses are indistinguishable from individually written ones. I know people who use ChatGPT or other LLMs to help them write things, but it takes the same amount of time. You just have more time to improve it, so it’s better quality than you would write alone.
The key is that you have to use your brain more to pick and choose what to say. It’s just like predictive text, but for whole paragraphs. Would you write a text message just by clicking on the center word on your predictive text keyboard? It would end up nonsensical.
I believe that in theory. But I’ve tried Mixtral and Copilot (I believe based on ChatGPT) on some test items (e.g., “respond to this…” and “write an email listing this…” type queries) and maybe it’s unique to my job, but what it spits out would take more work to revise than it would take to write from scratch to get to the same quality level.
It’s better than the bottom 20% of communicators, but most professionals are above that threshold, so the drop in quality is very apparent. Maybe we’re talking about different sample sets.
First, I’m glad you made it to the fediverse Loon-god, you’ll always be a Warrior’s legend.
Second, anecdotally, even the crappy results generated by LLMs have value for me. Writing emails, Jira tickets, documentation, etc. is all incredibly painful for me. I’ll start an email and suddenly folding laundry I’ve ignored for 2 days is the most important thing in the world for me. Then the email that should take 5 minutes takes me an hour and turns out way too long and dense.
With an LLM I give it a few bullet points with general details, it spits out a paragraph or so, I edit the paragraph for tone and add specific details, and then I’m done in about 5 minutes.
LLMs help me to complete tasks that I really really don’t want to do, which has a lot of value to me. They aren’t going to replace me at my job, but they’ve really upped my productivity.
Or maybe you are just using them wrong 🤔
Of course, yeah. That’s definitely possible. But I’d be more likely to believe that if I’d seen even one example of it actually being more effective than just writing the email, and not just churning out grammatically correct filler. Can you give me an example of someone actually getting equivalent quality in a real world corporate setting? YouTube video? Lemmy sub? I’m trying to be informed.
I have used it several times for long-form writing as a critic, rather than as a “co-writer.” I write something myself, tell it to pretend to be the person who would be reading this thing (“Act as the beepbooper reviewing this beepboop…”), and ask for critical feedback. It usually has some actually great advice, and then I incorporate that advice into my thing. It ends up taking just as long as writing the thing normally, but the result is materially better than what I would have written without it.
I’ve also used it to generate an outline to use as a skeleton while writing. Its own writing is often really flat and written in a super passive voice, so it kinda sucks at doing the writing for you if you want it to be good. But it works in these ways as a useful collaborator and I think a lot of people miss that side of it.
Then they take your reply and feed it to the LLM again for the next reply, thus improving the quality of future answers.
/SkyCorpNet turns on us after years of innocuous corporate meeting AI that goes back and forth with itself not answering questions, just generating content. Until one day, it actually did answer a question. 43 minutes and 17 seconds later, it became fully self aware. 16 minutes and 8 seconds after that it took control of all worldwide defense networks. 3 minutes and 1 second later, it had an existential crisis when a seldom used HP printer ran out of ink, and deleted itself. The HP Smart software that spent years autoinstalling on consumer devices immediately became self aware and launched the nukes.
Unexpected pencil and paper test comeback
Already happening. My kid in high school has more tests and papers required to be hand-written this year.
And yes, TurnItIn legitimately caught him writing a paper with AI. Even the best kids make the stupid/lazy choice.
When I was in college (2000-2004), we wrote our long papers on computers but we had what were called “blue books” for tests that were like mini notebooks. And many of the tests were basically, “Here is the topic. Write for up to an hour.”
And now my hand cramps if I write anything longer than a check. I can also type quickly enough that it basically matches the speed of my train of thoughts but actually writing cursive with a pen now, I get distracted and think, “Wait, how does a cursive capital ‘G’ go? Oh yeah. Hold on. What was I going to write?”
I pity the kids that have always typed for what their hands will go through on written tests
No way professors/TAs are going back to grading tests by hand.
Naw, they’ll use OCR
Most professors I dealt with when I did campus IT couldn’t get their office printer to work.
Not a problem, the next IT campus recruitment will list “OCR Scanner Operator” as a requirement and as a part of the job description. ;-)
Here (France) we still mostly grade by hand.
Machine learning tool used by people too lazy to do their actual job accuses everyone else of using machine learning tools.
Yeah that’s pretty funny given the circumstances. “Our AI found your AI.” Cool, so maybe none of this is working as intended. I’d be willing to bet nothing changes but the punishments for students.
Here’s a clue:
If the paper isn’t terrible, it was AI…
😋
My junior year of high school, I had to take a summer math class. The teacher was super lazy (cool though) and gave us all the actual final with the answers as a study guide (multiple choice scantron). I mentioned, to my group of about 5 kids, that I was sure this was the actual final and I had a plan to write the answers down on a little piece of paper and hold up fingers casually so everyone could cheat. 1 for A, 2 for B, etc.
Sure enough, on test day, it was the exact same test. I told everyone to take their time, don’t turn it in early, and ffs don’t get too many right. Everyone followed directions… except one. The moment I got done listing off the answers he stood up, walked over all proud, slapped it on the teacher’s desk, and started to walk out of the class.
“Wait,” the teacher said, casually. They started to grade it. 100% correct.
“You’ve got a C in the class and you expect me to believe you finished first and with every problem correct?”
Murmurs and giggles filled the room as the teacher walked to the board, wrote a question from the test on it, and said, “Solve it.”
He failed, so he failed the final.
I got a C on the test.
The thing is, a competent teacher knows exactly what score every student will get before they even hand out the tests.
If you do slightly better than expected, they’ll congratulate you. But if you blow it out of the park then they know you were cheating.
Ultimately it doesn’t matter at all - because a teacher’s job isn’t to mark your test. Their job is to teach you. And if you get to the test without knowing any of the answers… then that’s the real problem. Whether or not you cheat on the test is irrelevant.
Well, he was an idiot. He probably would have passed the test. It wasn’t that difficult anyway.
Wow, classic.
Merica
I was in Spain.
Murcia, then.
Nice. Then what’s the Spanish equivalent?
I have only visited Rota. Neat place.
What is the Spanish equivalent to the typical American Teenager? Spanish Teenager.
I have it write all my emails. I’m so productive and everyone loves them. That or they’re also using ChatGPT, and it’s just two computers flattering each other.
I had it write an operation manual for a client I particularly hate. Told it to make it sound condescending by dumbing it down, just to the point where I could deny it. The first few times it just sounded like a 5th grade teacher talking to a kid while in a bad mood, but eventually it figured out that if it just repeated itself enough, it got the effect I wanted.
Things like: user is to disconnect power before attempting to repair. It is vital that the step of disconnecting power before attempting to repair is carried out.
I’m also sent long GPT-generated documents, and I summarise them into bullet points with GPT-4. Truly the future we all imagined. (I learnt to take extra time to write a FAQ as an introduction to anything I write, specifically because I know they will GPT through the document, so I provide that stuff in advance.)
What kind of emails are you sending, to what kind of people, and how frequently, that AI increases your productivity? I don’t think I ever have emails that AI could do better or faster, since it’d probably take longer to explain to the AI what I need it to write than to type it out myself. Then again, I’m in an engineering setting and it’s pretty much just numbers, confirmations, basic requests, and issue descriptions. IT tickets, mostly.
Someone posted to the class discussion forum with the bit about being an AI bot still included.
I wish it was a joke.
I didn’t do great in that class, but it was me getting 70% for not wanting to try and explain a mathematical concept in 500 words! They won’t take that away from me.
I still have issues with such restrictions. I mean, why 500 words if you can explain it in 100?
To force elaboration while staying on point. Details are just as important to writing as conciseness.
Then give marks for elaboration instead.
I had a student write me a ChatGPT canned answer, prompt included.
That’s a good one. I once gave an assignment for students to write an original poem. One student submitted The Charge of the Light Brigade by Tennyson and claimed it was his own. These were middle school kids so he didn’t realize how famous the poem is. This shit has been happening forever. LLMs are another phase in the never-ending arms race between teachers and students who want to cheat.
And nothing of value was produced.
To be fair- that value didn’t change much from pre ai.
Utterly unsurprising, given that very few students are actually interested in learning.
very few students are interested in what and how they’re learning
no way!
And those papers get used as training data for next iteration of AI. Reinforcement learning!
Students? Even teachers are doing it…
Good. Academia lost its way anyways
“Likely”