• 2 Posts
  • 59 Comments
Joined 2 months ago
Cake day: June 4th, 2025

  • Thanks, I almost didn’t post because it was an essay of a comment lol, glad you found it insightful

    As for Wolfram Alpha, I’m definitely not an expert but I’d guess the reason it was good at math was that it would simply translate your problem from natural language into commands that could be sent to a math engine that would do the actual calculation.

    So it basically acts like a language translator, but for typed-out math: it converts your question into input for an advanced calculation program (like Wolfram Mathematica)

    Again, this is just speculation because I’m a bit too tired to look into it rn, but it seems plausible since we had basic language translators online back then (I think…) and I’d imagine parsing written math is probably easier than natural language translation
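
    Just to illustrate the idea, here's a toy sketch of "translate, then hand off to a math engine." The phrase table and everything else here is made up by me for illustration, not how Wolfram Alpha actually works:

    ```python
    import re

    # Hypothetical phrase-to-operator table; a real system would use a full grammar.
    REPLACEMENTS = [
        (r"\bplus\b", "+"),
        (r"\bminus\b", "-"),
        (r"\btimes\b", "*"),
        (r"\bdivided by\b", "/"),
        (r"\bsquared\b", "** 2"),
    ]

    def translate(question: str) -> str:
        """Turn typed-out math into an expression a calculation engine could run."""
        expr = question.lower().strip("?")
        for pattern, symbol in REPLACEMENTS:
            expr = re.sub(pattern, symbol, expr)
        return expr

    expr = translate("3 squared plus 4 squared")
    print(expr, "=", eval(expr))  # 3 ** 2 + 4 ** 2 = 25
    ```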


  • Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.

    I loathe Python irrationally (and I guess I’m a masochist who likes to reinvent the wheel programming-wise lol) so I’ve written my own neural nets from scratch a few times.

    Most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs. You use the difference between the desired outcome and the actual outcome to calculate a change in weights that would minimize that error.
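
    Here’s that loop in miniature for a single linear neuron (my own toy example, nothing to do with any real library):

    ```python
    # Gradient descent on one linear "neuron": learn y = w*x + b from samples of y = 2x + 1.
    data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
    w, b, lr = 0.0, 0.0, 0.05

    for epoch in range(2000):
        for x, target in data:
            actual = w * x + b        # forward pass: the network's current answer
            error = actual - target   # difference between actual and desired outcome
            w -= lr * error * x       # gradient of the squared error with respect to w
            b -= lr * error           # gradient with respect to b

    print(round(w, 2), round(b, 2))   # converges to roughly w=2.0, b=1.0
    ```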

    This has two major issues that stand in the way of AGI: input size limits and determinism.

    The weight matrices are set for a certain number of inputs. Unfortunately you can’t just add a new unit of input and assume the weights will be nearly the same. Instead you have to retrain the entire network. (The field that studies reusing trained weights like this is called transfer learning, if you want to learn more.)
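
    Here’s what that size constraint looks like concretely (toy numpy example, shapes picked arbitrarily):

    ```python
    import numpy as np

    n_inputs, n_hidden = 4, 8
    W = np.random.randn(n_hidden, n_inputs)   # weight matrix sized for exactly 4 inputs

    x = np.random.randn(n_inputs)
    print((W @ x).shape)                      # (8,) -- works fine

    x_bigger = np.random.randn(n_inputs + 1)  # add one new unit of input...
    try:
        W @ x_bigger
    except ValueError as err:
        print("shape mismatch:", err)         # ...and the trained weights no longer fit
    ```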

    This input constraint is preventative of AGI because it means a network trained like this cannot have an input larger than a certain size. Problematic, since the illusion of memory that LLMs like ChatGPT have comes from the fact they run the entire conversation through the net. Also just problematic from a size and training time perspective, since increasing the input size drastically increases the cost of basically everything else (in transformer-style models, for instance, the cost of attention grows quadratically with context length).

    Point is, current models are only able to simulate memory by literally holding onto all the information and processing all of it for each new word, which means there is a limit to their memory unless you retrain the entire net to know the answers you want. (And it’s slow af.) Doesn’t sound like a mind to me…
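
    The general pattern behind that illusion, sketched out (not any particular vendor’s API; MAX_TOKENS and the crude word count are stand-ins I made up):

    ```python
    MAX_TOKENS = 4096              # hypothetical context limit for the net

    conversation: list[str] = []

    def chat(user_message: str, model) -> str:
        """Fake 'memory': resend the entire transcript through the net every time."""
        conversation.append("User: " + user_message)
        prompt = "\n".join(conversation)
        while len(prompt.split()) > MAX_TOKENS:   # crude stand-in for a real token count
            conversation.pop(0)                   # oldest messages fall out of 'memory' forever
            prompt = "\n".join(conversation)
        reply = model(prompt)                     # one full forward pass over everything so far
        conversation.append("Assistant: " + reply)
        return reply
    ```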

    Now determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They are literally just complicated predictive algorithms, like linear regression. I’m dead serious. It’s basically regression, just in a very high-dimensional vector space.

    ChatGPT does not think about its answer. It doesn’t have any sort of object identification or thought delineation because it doesn’t have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it’s off, you do some math to figure out what weight modifications would have led it to a better answer.
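
    Stripped to its skeleton, that training step looks something like this (a toy softmax predictor I’m improvising over a four-word vocabulary, definitely not ChatGPT’s actual code):

    ```python
    import numpy as np

    vocab = ["the", "cat", "sat", "mat"]
    V = len(vocab)
    W = np.zeros((V, V))                # weights: column = previous word, row = next-word score

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # One training pair: after "the", the desired next word is "cat".
    prev, target = vocab.index("the"), vocab.index("cat")

    for step in range(100):
        probs = softmax(W[:, prev])     # predicted distribution over the next word
        grad = probs.copy()
        grad[target] -= 1.0             # cross-entropy gradient: predicted minus desired
        W[:, prev] -= 0.5 * grad        # the "some math" that nudges weights toward the answer

    print(vocab[int(np.argmax(W[:, prev]))])  # "cat" -- it predicts what it was trained on
    ```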

    All these models do is what they were trained to do. Now, they were trained to be able to predict human responses, so yeah, it sounds pretty human. They were trained to reproduce answers on Stack Overflow and Reddit etc. so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren’t trained on because they’re similar enough to the questions they were trained on… but it’s not thinking. It isn’t doing anything. The program is just multiplying numbers that were previously set by training against the input to find the most likely next word.

    This is why LLMs can’t do math: they don’t actually see the numbers, and they don’t know what numbers are. They don’t know anything at all because they’re incapable of thought. Instead there are simply patterns in which certain numbers show up, and the model gets trained on some of them, but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently or just by surrounding it with different words, because the model was never trained for that scenario.
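
    You can see the disconnect with a toy tokenizer (the vocabulary and IDs below are invented for illustration; real tokenizers differ but split numbers just as arbitrarily):

    ```python
    toy_vocab = {"12": 881, "345": 5204, "1": 16, "2345": 9902}

    def tokenize(text: str) -> list[int]:
        """Greedy longest-match tokenization, like a very simplified BPE."""
        ids, i = [], 0
        while i < len(text):
            for length in range(len(text) - i, 0, -1):
                piece = text[i:i + length]
                if piece in toy_vocab:
                    ids.append(toy_vocab[piece])
                    i += length
                    break
            else:
                raise ValueError(f"no token for {text[i]!r}")
        return ids

    print(tokenize("12345"))  # [881, 5204] -- "12" + "345"; nothing numeric ever reaches the model
    ```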

    Models can only “know” as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don’t. And you can’t just say “you were wrong” because the model isn’t plastic (capable of changing from inputs alone). You have to train it with the correct response in mind to get it to “learn,” which again takes time and really isn’t learning or intelligence at all.

    Now there are some more exotic neural network architectures that could surpass these limitations.

    Currently I’m experimenting with Spiking Neural Nets (SNNs), which are much more capable of transfer learning, more closely model biological neurons, and come with other cool features like handling temporal changes in input well.

    However, there are significant obstacles with these networks and not as much research, because they only run well on specialized hardware (since they are meant to mimic biological neurons, which all operate in parallel) and you kind of have to train them slowly.

    You can do some tricks to use gradient descent but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building the neuromorphic hardware for them).

    SNNs with time-based learning rules (typically some form of STDP, which mimics Hebbian learning as in biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. Capable as in “this could have discrete time-dependent waves of continuous self-modifying spike patterns which could theoretically be thoughts,” not as in “we can make something that thinks.”
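
    For the curious, the core STDP update is surprisingly small. This is my own toy version; the constants are typical textbook-ish values, not from any specific paper:

    ```python
    import math

    A_PLUS, A_MINUS = 0.01, 0.012   # learning amplitudes (toy values)
    TAU = 20.0                      # time constant in milliseconds

    def stdp_dw(t_pre: float, t_post: float) -> float:
        """Weight change for one pre/post spike pair, based purely on relative timing."""
        dt = t_post - t_pre
        if dt > 0:    # pre fired before post: causal, so strengthen the synapse
            return A_PLUS * math.exp(-dt / TAU)
        else:         # post fired before pre: anti-causal, so weaken it
            return -A_MINUS * math.exp(dt / TAU)

    print(stdp_dw(10.0, 15.0))  # positive: potentiation
    print(stdp_dw(15.0, 10.0))  # negative: depression
    ```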

    Like these neural nets are good with sensory input, and that’s about as far as we’ve gotten (hyperbole, but not by that much). But these networks are still fascinating, and they do help us test theories about how the human brain works, so eventually maybe we’ll make a real intelligent being with them, but that day isn’t even on the horizon currently.

    In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.

    The closest alternative that might be able to do this (as far as I’m aware) is relatively untested and difficult to prototype (trust me, I’m trying). Furthermore, the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms, meaning training must be done on a much more rigorous and time-consuming basis that is not economically favorable. Ergo, we’re not even all that motivated to move toward AGI territory.

    Lying and saying we are close to AGI when we aren’t at all close, however, is economically favorable, which is why you get headlines like this.



    1. We didn’t really have a democratic choice
    2. Most of my countrymen are stupid and proud of it
    3. Most of the people who are aware of how bad it is are not willing to break the law or upset the status quo to fight it
    4. Reform and revolution take organization but that takes time and effort that most can’t afford.
    5. People who were aware of how bad things were (and are getting) have become exhausted and constantly feel powerless, to the point they can’t find the strength to keep trying

    Those with the time/wealth/power to do anything are too blind or unsympathetic to do anything real. Those without lack the resources to do anything influential without organization, which they also lack the time and resources to create.

    We’re already in a dictatorship. But you likely have time to stop your country from following suit. Make sure there are good guys left to beat the shit out of us in the end


    Edit: to clarify, by “we didn’t have a democratic choice” I was referring to gerrymandering, voter suppression, and other things like winner-take-all states. I voted, and I know there were a surprising number of people who voted blue despite being lifelong Republicans. It didn’t do anything because we don’t have democracy.





  • My family members have joked about being ADHD for my whole life since our conversations often go all over the place and we have focus/memory issues.

    So, not really an ah-hah moment when I got diagnosed, but after getting medicated and realizing normal people can actually pay attention to and learn from lectures without needing to put in constant effort? Yeah, that was a bit of an eye-opening experience.

    I also learned that there is a significant difference between laziness (not doing something because you don’t want to) and executive dysfunction (not doing something because your brain won’t let you even if you want to do it).

    The meds are a godsend, but I should note they don’t get rid of the ADHD; they mostly just make it easier to manage.

    Oh yeah, one other thing that was an ah-hah moment was being able to make plans and lists and things and then just do them and remember I’ve made them. Honestly that’s probably the ability that’s changed my life most since I got medicated.


  • Ha chocolate is the only thing I’d say I’m addicted to too lol. I have to force myself to take a supposedly addictive drug every morning, but the only thing I’d rather not go a day without is chocolate lol

    And yeah it’s a pretty broad spectrum. I’m a super quiet, reserved person in public, not really the type of person most think of when someone says ADHD.

    I didn’t get diagnosed until a year or two ago when I noticed that after drinking an energy drink I got all the tasks done I’d been putting off for months and was able to take a nap during the day lol


  • Ditto to the dentists and the weed. And I’ve not done any opiates that I can remember but my cousin, who is similar to me, said morphine did nothing for his broken leg but make him feel sick to his stomach so he just wouldn’t take it.

    I am ADHD. Methylphenidate (Ritalin) helped my focus but made me, like, physically anxious and sick to my stomach; amphetamine (Adderall) typically feels like nothing, though it helps with executive dysfunction, and sometimes I feel cold or sleepy.

    The reason I asked is because some of the others who shared my sentiment about alcohol mentioned the same thing about weed and painkillers and had ADHD. Some of them also mentioned having red hair, but I don’t really fit that category.

    Not feeling the “high” of drugs does sound like it would be related to ADHD, since it would imply abnormalities in the reward pathways of the brain, and from what I can tell, the weed and painkiller issues seem more like dysfunctional opioid receptors. Then again, I’m not a doctor lol


  • If I drink a lot I’ll also get slightly dizzy and feel like my vision is slightly delayed, but I’m still able to keep my balance and focus. I’ll also sometimes get sick to my stomach, but food helps with that.

    But yeah I don’t get any positive effects either. No buzz, no happiness, no reduced inhibitions.

    Also, turns out you and I aren’t the only people like this. Every time I bring it up on Lemmy, some other person seems to comment that they feel the same.

    Do other drugs like weed or stimulants not work either? And are you ADHD?



  • Another thing that is relevant to my opinions on this topic: I don’t get drunk.

    My watch confirms that my heart rate and breathing are affected, but apart from the burn going down (and a headache if I drink more than a few shots), I feel no other effects. So unlike most people I don’t have the same source of positive associations with the taste.

    That being said, from a strictly flavor perspective, gin isn’t horrible. Juniper is one of the few flavors that doesn’t clash with ethanol as much as others. Grapefruit also fits with the taste of alcohol pretty well, but apart from those… I think I’d rather have a Capri Sun than a cocktail lol


  • White Claw is a drink that comes in vaguely energy-drink-looking cans, tastes bad (if it has any flavor at all), and has a slight alcohol content.

    In fairness, my opinion of flavor should be taken with a grain of salt (lol) because I really dislike the taste of alcohol and am honestly surprised anyone could drink even a low percentage without recognizing ethanol’s horrible flavor.





  • I do love that Aotearoa has incredible avian diversity.

    On one hand, we have kea parrots, who are smarter than most of the human tourists they like pranking and stealing from.

    And on the other side of the spectrum we have the kakapo: literally the dumbest bird in existence.

    Such amazing biodiversity lol



  • I’ve come to the conclusion that suffering is really just anything that invades your focus without your desire for it to happen.

    Thinking about anything you would rather not think about is suffering. You get cut and your brain constantly reminds you of it because evolution is a bitch. Hatred, envy, anger, intrusive thoughts, headaches, itchy clothes, annoying noises in your environment, etc. Anything that steals your attention without your consent is suffering.

    So if you’re so focused on avoiding suffering that you aren’t able to focus on doing what you want, then yep, suffering.