• @[email protected]

    I hardly see it changing, to be honest. I work in the field too, and I can imagine LLMs being good at producing decent boilerplate straight out of the documentation, but nothing more complex than that.

    I often use LLMs on my personal projects, and - for example - Claude or ChatGPT 4o often spit out programs that don’t compile, call nonexistent functions, are bloated, etc. Possibly they do better for languages with more training data (like Python), but I can’t see this as a “radical change”; it’s more like a well-configured snippet plugin and autocomplete feature.

    LLMs can’t count, and they can’t analyze novel problems (by definition) or provide innovative solutions… why would they radically change programming?

    • @[email protected]

      You’re missing it. Use Cursor or Windsurf. The autocomplete will help in so many tedious situations. It’s game changing.

    • @[email protected]

      ChatGPT 4o isn’t even the most advanced model, yet I have seen it do things you say it can’t. Maybe work on your prompting.

      • @[email protected]

        That is my experience: it’s generally quite decent for small and simple stuff (as I said, a distillation of the documentation). I use it for Rust, where I’m sure the training material was much smaller than for other languages. It’s not a matter of prompting, though; it’s not my prompt that makes it hallucinate functions that don’t exist in libraries or write code that doesn’t compile, it’s a feature of the technology itself.

        GPTs are statistical text generators, after all; they don’t “understand” the problem.

        • @[email protected]

          It’s also pretty young; human toddlers hallucinate and make things up. Adults do too. Even experts are known to fall prey to bias and misconception.

          I don’t think we know nearly enough about the actual architecture of human intelligence to start asserting an understanding of “understanding”. I think it’s a bit foolish to claim with certainty that LLMs in a MoE framework with self-review fundamentally can’t get there. Unless you can show me, materially, how human “understanding” functions, we’re just speculating on an immature technology.