• capital · 12 points · 8 months ago (edited)

    People keep saying this but it’s just wrong.

    Maybe I haven’t tried the language you have, but it’s pretty damn good at code.

    Granted, whatever it puts out needs to be tested and possibly edited, but that’s the same thing we had to do with Stack Overflow answers.

    • @[email protected] · 24 points · 8 months ago

      I’ve tried a lot of scenarios and languages with various LLMs. The biggest takeaway I have is that AI can get you started on something or help you solve some issues. I’ve generally found that anything beyond a block or two of code becomes useless. The more it generates, the more weirdness starts popping up, or it outright hallucinates.

      For example, today I used an LLM to help me tighten up an incredibly verbose bit of code. Today was just not my day, and I knew there was a cleaner way of doing it, but it just wasn’t coming to me. A quick “make this cleaner: <code>” and I was back to the rest of the code.
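
      As an illustration only (not the actual code from that session), the kind of tightening an LLM is reliably good at might look like this, sketched here in Python:

      # Hypothetical verbose version: collect the squares of the even numbers in a list.
      def squares_of_evens_verbose(numbers):
          result = []
          for n in numbers:
              if n % 2 == 0:
                  square = n * n
                  result.append(square)
          return result

      # The tighter version an LLM might suggest: same behaviour, one comprehension.
      def squares_of_evens(numbers):
          return [n * n for n in numbers if n % 2 == 0]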

      This is what LLMs are currently good for. They are just another tool, like tab completion or code linting.

    • @[email protected] · 3 points · 8 months ago

      I use it all the time and it’s brilliant when you put in the basic effort to learn how to use it effectively.

      It’s allowing me and other open source devs to increase the scope and speed of our contributions; just talking through problems is invaluable. Greedy, selfish people wanting to destroy things that help so many is exactly the rolling-coal mentality: fuck everyone else, I don’t want the world to change around me! It makes me so despondent about the future of humanity.