• @[email protected]
    18 months ago

    Thanks, I’ll check those out. The entire point of your comment was that LLMs are a dead end. The branching, as you call it, is just more parameters, which in lower-token models approaches a collapse; that is why more tokens and a larger context do improve accuracy, and why it does make sense to increase them. LLMs have also proven, in some cases, to have what you call reason, and what many call reason, though that is not a good word for it.

    Larger models provide a way to simulate the world, which in turn gives us access to the sensing mechanism of our brain: simulate, then attend to discrepancies between the simulation and the actual. That in turn gives access to action, which unfortunately is not very well understood. Simulation, or prediction, is what our brains constantly do in order to react and adapt to the world without massive timing failures and massive energy costs. Consider driving: you focus on unusual sensations and let action be an extension of purpose, letting constant prediction run while your muscles have already prepared to commit even precise movements, because enough practice has built your “model” of how wheel and foot apply to the vehicle.
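
    To make the “simulate, then attend to discrepancies” loop concrete, here is a minimal toy sketch in Python. Everything in it is an assumption for illustration: a scalar “vehicle” whose hidden response to the wheel is `true_gain`, and a plain LMS-style update standing in for practice. It is not a claim about brains or LLM internals, just the shape of the loop.

    ```python
    import numpy as np

    # Toy predictive loop: simulate, sense, attend only to surprises,
    # and cheaply adapt the internal model. All numbers are illustrative.
    rng = np.random.default_rng(0)

    true_gain = 2.0           # hidden world: response = true_gain * wheel + noise
    model_gain = 0.0          # the agent's internal "model" (starts untrained)
    learning_rate = 0.1
    surprise_threshold = 0.3  # errors above this would "grab attention"
    attention_events = 0

    for t in range(200):
        wheel = np.sin(0.1 * t)                            # action being taken
        prediction = model_gain * wheel                    # simulate expected response
        actual = true_gain * wheel + rng.normal(0, 0.02)   # sense the real response

        error = actual - prediction                        # discrepancy
        if abs(error) > surprise_threshold:
            attention_events += 1                          # only surprises cost focus

        # Constant, cheap adaptation ("practice"): nudge the model toward
        # what actually happened. A plain LMS update, not neuroscience.
        model_gain += learning_rate * error * wheel

    print(f"learned gain ≈ {model_gain:.2f} (true: {true_gain}), "
          f"surprises: {attention_events}/200")
    ```

    Early on nearly every step trips the surprise threshold; once the internal gain matches the world, prediction errors shrink into the sensor noise and attention is rarely needed, which is the timing-and-energy saving described above.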