We follow the classic experimental paradigm reported in Power et al. (2022) for analyzing “grokking”, a poorly understood phenomenon in which validation accuracy dramatically improves long after the train loss saturates. Unlike the previous templates, this one is more amenable to open-ended empirical analysis (e.g. under what conditions grokking occurs) rather than just trying to improve performance metrics.
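For readers unfamiliar with that paradigm: Power et al. train small networks on algorithmic binary-operation tables (e.g. modular addition) with a fraction of the table held out for validation. A minimal sketch of that dataset construction, assuming modular addition with a 50% train split (the specific modulus, split fraction, and operation here are illustrative choices, not the paper's exact settings):

```python
from itertools import product
import random

def make_modular_dataset(p=97, train_frac=0.5, seed=0):
    """Build the full (a, b) -> (a + b) mod p operation table and split it.

    Grokking is typically studied by training on one part of the table
    and watching validation accuracy on the held-out part.
    """
    # Every entry of the p x p operation table, labeled with (a + b) mod p.
    examples = [((a, b), (a + b) % p) for a, b in product(range(p), repeat=2)]
    rng = random.Random(seed)
    rng.shuffle(examples)
    split = int(train_frac * len(examples))
    return examples[:split], examples[split:]

train, val = make_modular_dataset()
```

A model trained only on `train` can then be evaluated on `val` over many optimization steps; grokking refers to the val accuracy jumping to near-perfect long after train accuracy has saturated.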
AI doesn’t grok anything. It doesn’t have any capability of understanding at all. It’s a Markov chain on steroids.
Did you read the paper? Or at least have an LLM explain it?
I read the abstract, and the connection to your title is a mystery. Are you using “grock” as in “transcendental understanding” or as Musk’s branded AI?
No c, just grok, originally from Stranger in a Strange Land. But a more technical definition is provided and expanded upon in the paper. Mystery easily dispelled!
In that case I refer you to u/catloaf's post. A machine cannot grock, not at any speed.
Thanks for clarifying, now please refer to the poster’s original statement:
AI doesn’t grok anything. It doesn’t have any capability of understanding at all. It’s a Markov chain on steroids.
Oh okay so they’re just redefining words that are already well-defined so they can make fancy claims.
Well-defined for casual use is very different than well-defined for scholarly research. It’s standard practice to take colloquial vocab and more narrowly define it for use within a scientific discipline. Sometimes different disciplines will narrowly define the same word two different ways, which makes interdisciplinary communication pretty funny.
No. It’s not standard at all, especially when the goal is overtly misleading.
Maybe one or both disciplines are promoting bullshit.
deleted by creator
…is how generative-AI haters redefine terms and move the goalposts to fight their cognitive dissonance.
Imagine believing that AI-haters are the ones who redefine terms and move goalposts to fight their cognitive dissonance.