• @[email protected]

    Not to be that guy, but the image with all the train tracks might just be doing its job perfectly.

    • @[email protected]

      It gives you the right picture when you ask for a single straight track in the prompt. Now you have to spend 10 hours debugging code and fixing hallucinations of functions that don’t exist in libraries it doesn’t even need to import.

      • @[email protected]

        Not a developer. I just wonder how AI hallucinations come about. Is it the ‘need’ to complete the requested task at the cost of being wrong?

        • @send_me_your_ink

          Full disclosure - my background is in operations (think IT), not AI research, so some of this might be wrong.

          What’s marketed as AI is something called a large language model (LLM). The distinction is important because “AI” implies intelligence, whereas an LLM is something else. At a high level, LLMs use “tokens” to break natural language apart into elements a machine can work with, and then recombine those tokens to “create” something new. When an LLM is generating output it does not know what it is saying - it only knows which token statistically tends to come after the tokens it has already generated.
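
          To make that concrete, here’s a deliberately tiny toy in Python. It’s something I made up for illustration - a bigram word model, nothing like the neural networks behind real LLMs - but the “pick a statistically likely next token” loop is the same basic idea:

              import random
              from collections import defaultdict

              # Toy "training data": count which word follows which.
              corpus = "the cat sat on the mat . the dog sat on the rug .".split()
              followers = defaultdict(list)
              for current, nxt in zip(corpus, corpus[1:]):
                  followers[current].append(nxt)

              def generate(start, length=6):
                  """Repeatedly append a statistically plausible next token."""
                  out = [start]
                  for _ in range(length):
                      options = followers.get(out[-1])
                      if not options:
                          break
                      # Sampling from the raw list reproduces the observed frequencies.
                      out.append(random.choice(options))
                  return " ".join(out)

              print(generate("the"))  # e.g. "the cat sat on the rug ."

          At no point does this code “understand” cats or rugs. It only knows which word tended to come next in the text it was fed.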

          So, to answer your question: an AI can hallucinate because it does not know the answer - it’s using advanced math to work out that the period goes at the end of the sentence and not in the middle.
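
          And with the same toy generate() from above you can watch a “hallucination” happen: every individual step is a reasonable statistical guess, but the chain as a whole can assert something the training text never said.

              # Continuing the sketch above (reuses the generate() function).
              # The corpus only ever put the cat on the mat, but a chain of
              # locally plausible next tokens can still produce
              # "dog sat on the mat ." - fluent, confident, and wrong.
              for _ in range(5):
                  print(generate("dog"))

          That’s roughly what’s happening when an LLM confidently names a function a library never had: the name fits the statistical shape of that library’s API, so out it comes.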