• Drew

    People genuinely think LLMs will solve climate change

  • @[email protected]

    It always reminds me of how seriously people were trying to build steam-powered aircraft. I imagine they had a bunch of “if we can just get some lighter material” kind of discussions right up until some bicycle guys used an internal combustion engine to make history.

  • GHiLA

    Consumer: buys next phone (and car) with the least amount of AI possible

    Hey society!

    This is broken!

    This isn’t how capitalism works! We’re supposed to choose with our wallets, not have them choose with their investors.

    Fuck them, right?

  • @[email protected]

    It’s always amazing to see how folks latch on to the extreme vs the reality.

    ML and AI tools are quite helpful. Yes, they make mistakes, but at the end of the day they reduce human effort. It’s really not hard to see the usefulness.

    • @[email protected]

      Reduces human effort in what? Certainly for producing garbage, but it increases my human effort in having to wade through that garbage.

      • @[email protected]

        The soul-crushing effort of socialising and producing art, an effort that is eating all that mental and physical energy which would be better utilised in the mines to make more profits for billionaires. /s

      • @[email protected]

        It reduces effort in summarizing reports or paper abstracts that you aren’t sure you need to read. It reduces efforts in outlining formulaic types of writing such as cover letters, work emails etc.

        It reduces effort when brainstorming mundane solutions to things, often by knocking off the most obvious choices, but that’s an important step in brainstorming if you’ve ever done it.

        Hell, I’ve never had ChatGPT give me the wrong instructions when I ask it for a basic cooking recipe, and it also cuts out all of the preamble.

        If you haven’t found uses for them, you either aren’t trying very hard or you’re simply not in an industry/job that can use them for what they’re useful for. Both of which are okay, but it’s silly to think your experience of not using them means no one can use them for anything useful.

        • @[email protected]

          Creating a lot of filler “content” is also another use for them, which is what I was getting at. While I have seen some uses for AI, it overwhelmingly seems to be used to create more work than reduce it. Endless spam was bad enough, but now that there’s an easy way to generate mass amounts of convincingly unique text, it’s a lot more to wade through. Google search, for example, used to be a lot more useful, and results that were wastes of time were easier to spot. That summaries can include inaccuracies or outright “hallucinations” makes it mostly worthless to me since I’d have to at the very least skim the original material to verify just in case anyway.

          I’ve seen AI in action in my industry (software development). I’ve seen it do the equivalent of slapping together code pieced together from Stack Overflow. It’s impressive that it can do that, but what’s less impressive are clueless developers trusting the code as-is with minimal verification/tweaks (just because it runs, doesn’t mean it’s correct or anywhere close to optimal) or the even more clueless executives who think this means they can replace developers with AI or that tasks are a simple matter of “ask the AI to do it”.

        • @Squirrelanna

          Just because you haven’t personally gotten an egregiously wrong answer doesn’t mean it won’t give one, which means you have to check anyway. Google’s AI famously recommended adding glue to your pizza to make the cheese more stringy. Just a couple of weeks ago I got blatantly wrong information about quitting SSRIs, with the source links directly contradicting its confidently stated conclusion. I had to spend EXTRA time researching just to make sure I wasn’t being gaslit.

          • @[email protected]

            Google’s AI is famously shitty. ChatGPT, especially the most recent version, is very good.

            Also don’t use LLMs for sensitive stuff like quitting SSRIs yet.

            • @Squirrelanna

              That’s the thing. I didn’t want to use it. The AI’s input was entirely unsolicited, and luckily I knew better than to trust it. I doubt the average user is going to care enough to get a second opinion.

      • @[email protected]

        I’ve found it to be pretty good at transforming and/or extracting data from human input. For example, I’ve got an app that handles incoming jobs, and among the sources of those jobs is “customer sent an email”. Pretty neat to give an LLM a JSON schema and tell it to fill the details it can figure out from the email. Of course, we disclose to the user that the details were filled in by AI and should be double checked for accuracy - But it saves our customers a lot of time having the details sussed out from emails that don’t follow a specific format.
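        The workflow described above can be sketched in a few lines. This is a minimal illustration, not the commenter’s actual app: the schema fields, prompt wording, and the stub standing in for the real LLM API call are all hypothetical.

        ```python
        import json

        # Hypothetical job-ticket schema; field names are illustrative only.
        JOB_SCHEMA = {
            "customer_name": "string or null",
            "address": "string or null",
            "requested_service": "string or null",
            "preferred_date": "ISO date string or null",
        }

        def build_prompt(email_body: str) -> str:
            """Ask the model to fill the schema from free-form email text."""
            return (
                "Extract job details from the email below. Respond with JSON only, "
                "matching this schema (use null for anything you can't find):\n"
                f"{json.dumps(JOB_SCHEMA, indent=2)}\n\nEmail:\n{email_body}"
            )

        def parse_job(raw_reply: str) -> dict:
            """Validate the model's reply: keep only schema keys, default missing ones to None."""
            data = json.loads(raw_reply)
            return {key: data.get(key) for key in JOB_SCHEMA}

        def call_llm(prompt: str) -> str:
            # Stub standing in for a real LLM API request.
            return '{"customer_name": "A. Smith", "requested_service": "gutter repair"}'

        email = "Hi, my gutters are leaking after the storm, can someone come by? - A. Smith"
        job = parse_job(call_llm(build_prompt(email)))
        # Fields the model couldn't find come back as None, ready for human review.
        ```

        The point of `parse_job` is the “double checked for accuracy” part: the model’s output is never trusted as-is, only coerced into the known schema so a human can review the gaps.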

    • @[email protected]

      Now, include the environmental costs of some of these tools, and whether they’re a) running at a loss or not in order to gain market share, and b) whether they’re the tools people are even using.

      Do we still come out ahead? Are the minutes saved - if there are truly any - actually saved, or just shoveled onto someone else’s plate as environmental damage?

      What’s the big picture here? Because society honestly should not give a flying fuck if your job becomes slightly easier at the cost of everybody else.

    • @[email protected]

      The extreme is tech bros hyping ML and AI as something they’re not, to get shareholders to pour millions into projects that will likely never achieve their end goals. Anyone genuinely in the ML and AI domain should be pissed, because it’s going to reduce interest and trust in these fields when the bubble bursts, and then the real researchers will be left to pick up the pieces while the tech bros move on to the next thing.

      The things that ChatGPT, gen AI, etc. can do now? They’re already crazy wild to me. But somehow, to create more hype, they’re advertised as one step away from AGI or one step away from flawlessly pipelining creative processes. They’re neither of these yet, and from what it seems, throwing more data at them likely won’t get there either. But of course, if you come up with a plan like “we need to double our compute bro, and then we will have AGI bro,” you can get investors to pay double or quadruple what they paid before. So in summary, they are basically con men.

    • @[email protected]

      The day-to-day reality, for me at least, is that the new hyped-up LLMs are largely useless for work and in some cases actual detriments. Some people at work use them a lot, but the heavy users tend to be people who were bad at their jobs, or at least bad at the communication aspect of their jobs. They were bad at communicating before, and now, with the help of ChatGPT, they are still bad at communicating, except they have gotten weirdly obstinate about their crappy work output.

      Other folks I know have tried to use them to learn new things but gave up on them when they kept getting corrected by subject matter experts.

      I played around with them for code generation but did not find it any faster than just writing and debugging my own code.

    • @[email protected]

      Yes, they can be useful at times. This does not mean you can just ditch all the human effort and algorithmic solutions and fill every nook and cranny with AI. Which is exactly where we’re at currently. And it’s turning out dreadful.

  • @[email protected]

    AI is shit. Poor programming results in heavy errors and intrusive break-ins during benign operations. Worst of all, the corpos that adopt it shove it into your systems in a way that makes it unremovable.

    • @[email protected]

      Poor programming?

      I’m sorry, LLMs are shit for various reasons, but “poor programming” isn’t one of them. And I bring this up because branding it as such suggests there is a “good programming” LLM that doesn’t have the inherent problems that any such system would have. Which just isn’t a thing with the way LLMs work.

  • @[email protected]

    I don’t get it. Who is claiming that if we build one more LLM we will achieve AGI? Maybe I just live under a rock. The top comment here says people believe LLMs will “solve climate change.” Who believes that? I do not know what any of this is on about; I have never seen these people.

  • Narri N.

    just one more SSRI bro, i promise bro the next SSRI will work bro please i need one more SSRI bro

    Edit: okay, well maybe instead of insinuating that none of the SSRIs work, I should have said that all of them have so many potentially crippling side effects that prescribing these “medications” should be treated far more as an absolute last-resort solution, together with inpatient care, than handed out like candy as they are today. But I also understand that it’s the cheapest option available, since the best option is therapy, of which there are multiple kinds, all requiring quite a lot of time and work, and all costing significantly more than what anyone is ready to pay. But that shit is long, so take it as you will.

    • @[email protected]

      no… that’s not how that works.

      But I also understand that it is the cheapest option available, as the best option is therapy,

      Outcomes of both treatments combined are superior to outcomes from either alone. SSRIs are just a tool to help you retrain your brain more easily during the course of behavioral modifications, which a therapist typically helps you identify and implement.

      they’re powerful which makes them difficult to use, i get it. finding the right medication can be exhausting, because you need to build up the drug in your body to have an effect and you need to titrate off to safely stop the drug. so it’s a long game of trial and error.

      But I can assure you that psych meds are ridiculously important for managing certain conditions.

      i really wish the fuckers who tout pharmaceutical population-control conspiracies would just spend a weekend in the Before Times, when people with what are mundane mental disorders by today’s standards were locked up and abused. yeah, totally, Randy, it’s Lexapro’s fault your life is a mess; you’d be way better off having a manic episode in 1700.

      • Narri N.

        okay yeah, I think this might be the most sensible answer here. I myself tend to get “a bit” frothing at the mouth when it comes to these things, because of personal experiences. So sorry everyone, I got carried away.