Seriously, TRY and get an AI chat to give an answer without making stuff up. It is impossible. You can tell it “you made that data up, do not do that” … and it will apologize and say you were right, then make up more dumb shit.
Yeah, LLMs are great if you treat them like a tool to create drafts or give you ideas, rather than like an encyclopedia.
I’ll get hate for this, but for most of the tasks people use them for, they’re pretty dang accurate. I’m talking about frontier models, FYI.
Google Gemini gives me solid results, but I stick to strictly factual questions, nothing ambiguous. Got a couple of responses I thought were wrong, turns out I was wrong.
I got a Firefox plugin to block Gemini results because whenever I look up something for my medical studies, there’s a really high chance it spits out garbage or outright lies, when all I really wanted was the Google result for the NIH StatPearls article on the topic.
As a medical professional, generative AI and search adjuncts like Gemini only make my job harder.
I have found AI to be a terrible primary source. But something I’ve found very useful is to ask for a detailed response, structured a certain way, and then tell the AI to grade it as a professor would. It actually does a very good job of acknowledging gaps and giving an honest grade then.
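For anyone who wants to try that two-pass workflow, here’s a minimal sketch. It uses the OpenAI Python SDK purely as an example client (any chat API would do); the model name, prompt wording, and topic are illustrative placeholders, not anything from the comment above.

```python
# Minimal sketch of the "draft, then self-grade" workflow described above.
# Assumes the OpenAI Python SDK as an example client; the model name,
# prompt wording, and topic are all illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumption: swap in whatever chat model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


topic = "beta-blocker mechanisms of action"  # hypothetical example topic

# Pass 1: ask for a detailed response with an explicit structure.
draft = ask(
    f"Give a detailed explanation of {topic}. "
    "Structure it as: Overview, Mechanism, Clinical relevance, Key caveats."
)

# Pass 2: tell the model to grade its own draft as a professor would.
# This second pass tends to surface gaps the draft glossed over.
critique = ask(
    "Grade the following answer as a strict professor would. "
    "Point out factual gaps and unsupported claims, then give a letter grade.\n\n"
    + draft
)

print(draft)
print("---")
print(critique)
```

The grading pass doesn’t make the draft trustworthy; it just makes the gaps easier to spot, which is the point about getting an honest grade rather than an honest answer.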
AI shouldn’t be a primary source, but it’s great for starting a topic. Similar to talking to someone who’s moderately in the know about something you’re interested in.
That’s because ALL generative AI results, even the correct ones, are “made up”. They just exist on a spectrum of coincidental correspondence with reality. I’m still surprised that they manage to get as much right as they do.
I wish people would stop treating these tools as intelligent.