• @[email protected]
    link
    fedilink
    English
    1110 months ago

    Another idiot writer missing how AI works… along with every other automation and productivity increase.

    I literally automate jobs for a living.

    My job isn’t to eliminate the role of every staff member in a department; it’s to take the headcount from 40 to 20 while having the remaining 20 produce the same results. I’ve successfully done this dozens of times in my career, and generative AI is now just another tool we can use to push that number a little lower, or get there more easily, than we could before.

    Will I be able to take a unit of 2 people down to 0 people? No, I’ve never seen a process where I could eliminate every human.

    • mozzOP · 24 points · 10 months ago

      Cory Doctorow is an idiot writer? Do you know of him and have reached this conclusion, or do you not know who he is and are just throwing shade?

      I am curious: how much follow-up do you do a year after your automations, to see how the department’s profit-and-loss picture has worked out once your work is done?

      (Not that that’s the point; I think you’ll get very little sympathy here for “I help the already-rich to keep more of the productive output of the world and make sure workers keep less” even if you can make an argument that you can do it effectively.)

      • @[email protected]
        link
        fedilink
        English
        1310 months ago

        I’ve been following Doctorow for decades now (BoingBoing) and yes, he’s an idiot in this situation.

        I’m still working with the organizations I started automating for more than a decade ago; I’m sitting in the office of one of them right now. It’s worked out great. Nobody is complaining that this office now has people at separate desks instead of crammed together the way they were when I started. If it makes you feel any better, I almost exclusively do this for government and public organizations (I’m at a post-secondary education institution right now), though I really don’t care.

        Stopping or stalling productivity improvements is stupid. If a job can be automated, it’s effectively useless; keeping it around is nothing more than make-work. We should pass laws to redistribute wealth to solve that problem, not keep people in useless jobs by preventing automation.

        • mozzOP · 7 points · 10 months ago

          You’re still working simultaneously with dozens of different organizations? Maybe I’m misunderstanding something.

          Stopping or stalling productivity improvements is stupid. If a job can be automated, it’s effectively useless; keeping it around is nothing more than make-work. We should pass laws to redistribute wealth to solve that problem, not keep people in useless jobs by preventing automation.

          Like a lot of things, the devil is in the details. Almost everyone’s firsthand experience with consultants coming in and enacting “efficiency” is that it’s obviously bad for the employees, but also bad for the business. I’m not saying that’s the impact of what you’re doing, just that that’s what most people’s experience is going to be.

          So there’s a central question in AI: Once the machines can do everything for us, does that mean everyone eats for free? Or no one eats? What would your answer to that question be?

            • lad · 2 points · 10 months ago

              That’s what I’m saying and people call me radicalised for that 😅

          • @[email protected]
            link
            fedilink
            English
            710 months ago

            No, I have worked with a dozen or so organizations, but I’ve done multiple jobs for each. I’m a freelancer.

            As for your second question, I’d like to see a basic income implemented for all citizens in my country. I’ve talked to my local politicians about it multiple times. It’s something that people now know about, which is good progress in my opinion. I don’t expect it to happen soon, but hopefully we’ll get there before we start to have too many social problems.

          • @[email protected]
            link
            fedilink
            110 months ago

            does that mean everyone eats for free? Or no one eats?

            Yes.

            Everyone eats for free… but the machines don’t need to eat, so why produce any food at all? That soy and corn being used to feed caged animals for human consumption? Turn it into biofuel to power the machines. The hordes of hungry protestors? More biofuel.

    • @[email protected]
      link
      fedilink
      1410 months ago

      I sat in a room of probably 400 engineers last spring, and they all laughed and jeered when the presenter asked if AI could replace them. With the right framework and dataset, ML almost certainly could replace about 2/3 of the people there; I know the work they do (I’m one of them), and the bulk of my time is spent recreating documentation using 2-3 computer programs to facilitate calculations, and looking up and applying manufacturer’s data to the situation. Mine is an industry of high repeatability, and the human judgement part is, at most, 10% of the job.

      Here’s the real problem. The people who will be fully automatable are those with less than 10 years of experience. They’re the ones doing the day-to-day layout and design, and their work is monitored, guided, and checked by an experienced senior engineer who catches their mistakes. Replacing all of those people with AI will save a ton of money, right up until all of the senior engineers retire. In a system which maximizes corporate/partner profit, that will come at the expense of training the future senior engineers until, at some point, there won’t be any (or enough), and yet a substantial fraction of oversight will still be needed.

      Unfortunately, ML is based on human learning, and replacing the “learning” stage of a human practitioner with machines is eventually going to create a gap in qualified human oversight. That may not matter too much for marketing art departments, but for structural engineers it’s going to result in a safety or reliability issue for society as a whole. And since failures in my profession only occur in marginal situations (high loads: wind, snow, rain, mass gatherings), my suspicion is that it will be decades before we really find out that we’ve been whistling past the graveyard.

      • mozzOP · 4 points · 10 months ago

        Yeah. This is something that to me isn’t getting enough attention in the whole conversation. I’m trying to get myself up to speed on how to code effectively with AI tools, but I feel like understanding the code at a deep level is required to do that well.

        In the future, I think the “learning” that gives you that type of knowledge won’t be something people are forced to go through anymore, because AI can do the simple stuff for them. The inevitable result is that very few people will be able to do more than rely on the AI tools to either get it right or not, because they won’t understand the underlying systems. I’m honestly not sure what future is in store a couple of generations from now, other than most people being forced to trust the AI (whatever its capabilities or incapabilities are at that point). That doesn’t sound like a good scenario.

        • @[email protected]
          link
          fedilink
          410 months ago

          The future is already here. This will sound like some old man yelling at clouds, but the tools available for advanced structural design (automatic environmental loading, finite element modeling) are used by young engineers as magical black boxes which spit out answers. That’s little different from 30 years ago, when the generation before me would complain that calculators, unlike slide rules, were so disconnected from the problem that you could put in two numbers, hit the wrong operation, and get a nonsensical answer, but believe it to be correct because the calculator told you so.

          This evolution is no different; it’s just that the process of design (whether programming or structures or medical evaluation) will be further along before someone realizes that everything being offered is utter shit. I’m actually excited about the prospect of AI/ML, but it still needs to be handled like a tool. Modern machinery can do amazing things faster, and with higher precision, than hand tools, but when things go sideways it can also destroy things much more quickly and with far greater damage.

          • @[email protected]
            link
            fedilink
            410 months ago

            old man yelling at clouds

            My turn.

            Almost 30 years ago, in sunny Spain, a friend of mine was studying to become an Electrical Engineer. Among the things he told me would be under his responsibility was approving the plans for industrial buildings. “So your curriculum includes some architecture?”, I asked. “No need”, he responded, “you just put the numbers into a program and it spits out all that’s needed”.

            Fast forward to 2006, when an industrial hall in Poland, built by a Spanish company and turned into a disco, succumbed under the weight of snow on its roof, killing 65 people.

            Wonder if someone forgot to check the “it snows in winter” option… 🙄

          • @[email protected]
            link
            fedilink
            1
            edit-2
            10 months ago

            The difference is that calculators are deterministic and correct. If you get a wrong answer, it is you that made the mistake.

            LLMs will frequently output nonsense answers. If you get a wrong answer, it is probably the machine that made the mistake.

      • @[email protected]
        link
        fedilink
        110 months ago

        that will come at the expense of training the future senior engineers until, at some point, there won’t be any (or enough)

        Anything a human can be trained to do, a neural network can be trained to do.

        Yes, there will be a lack of trained humans for those positions… but spinning up enough “senior engineers” will be as easy as moving a slider on a cloud computing interface… or making a remote API call… done by whichever NN comes to replace the people from HR.

        ML is based on human learning, and replacing the “learning” stage of a human practitioner with machines is eventually going to create a gap in qualified human oversight

        Cue the humanoid robots.

        Better yet: outsource the creation of “qualified oversight”, and just download/subscribe to some when needed.

        • mozzOP · 6 points · 10 months ago

          Anything a human can be trained to do, a neural network can be trained to do.

          Citation needed

          • @[email protected]
            link
            fedilink
            1
            edit-2
            10 months ago

            Humans are neural networks… you can cite me on that.

            (Notice I didn’t say anything about the complexity, structure, or fundamental functioning of a human neural network. Everything points to modern artificial NNs being somewhat on a tangent to human ones… but also to there being some overlap already, and to that overlap being something that can be increased.)

            • mozzOP · 6 points · 10 months ago

              Humans are a lot more than the mathematical abstraction that is a neural network.

              You could say that you believe any computational task a human brain can accomplish, a neural network can also accomplish (simply assuming that all of the higher-level structures, the different parts of the brain allocated to particular tasks, the way it encodes and interacts with memories and absorbs new skills, and the variety of chemical signals which communicate more than a simple number 0 through 1 being sent through each neuron-to-neuron connection, are abstractable within the mathematical construct of a neural network in some doable way). But that’s (a) not at all obvious to me, (b) not at all the same as simply asserting that we’ve got it all tackled now that we can do some great stuff with neural networks, and (c) not implying anything at all about how soon it’ll happen (i.e., it could take 5 years, or 500, although my feeling is probably on the shorter side as well).

              • @[email protected]
                link
                fedilink
                1
                edit-2
                10 months ago

                Artificial NNs are simulations (not “abstractions”) of animal, and human, neural networks… so, by definition, humans are not more than a neural network.

                simple number 0 through 1

                Not how it works.

                Animal neurons respond as a clamping function: a constant 0 output up to some threshold, past which they start outputting neurotransmitters as a function of the input values. Artificial NNs have been able to simulate that for a while.
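
                For illustration, here is a minimal Python sketch of that clamping behavior; the threshold and the linear response above it are illustrative choices, not claims about real neurons or any particular activation function:

                ```python
                def clamped_activation(x: float, threshold: float = 0.5) -> float:
                    """Zero output up to the threshold, then output rising with the input.

                    A shifted ReLU, standing in for the "constant 0 up to some threshold,
                    then output as a function of the input" behavior described above.
                    """
                    return max(0.0, x - threshold)

                print(clamped_activation(0.2))  # 0.0 -> below threshold, the "neuron" stays silent
                print(clamped_activation(1.7))  # 1.2 -> above threshold, output grows with input
                ```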

                Still, for a long time it was thought that copying the human connectome and simulating it would be required to start showing human-like behaviors.

                Then, some big surprises came from a few realizations:

                1. You don’t need to simulate the neurons themselves, just the relationship between inputs and outputs (each one can be seen as the level of some neurotransmitter in some synapse).
                2. A grid of values can represent the connections of more neurons than you might think (most neurons are not connected to most others; the neurotransmitters don’t travel too far, they get reabsorbed, and so on).
                3. You don’t need to think “too much” about the structure of the network; add a few extra trillion connections to a relatively simple stack, and the network can start passing the Turing test.
                4. The values don’t need to be 16-bit floats; NNs quantized to as little as 4 bits (16 levels, 0 through 15) can still show pretty much the same behavior (see the sketch below).
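
                A rough sketch of point 4, assuming plain uniform (affine) quantization; the real 4-bit schemes used for LLMs (GPTQ, NF4, and so on) are more sophisticated, but the core idea of mapping floats onto 16 discrete levels is the same:

                ```python
                import numpy as np

                def quantize_4bit(w: np.ndarray):
                    """Map float weights onto 16 integer levels (0..15) plus scale and offset."""
                    w_min, w_max = float(w.min()), float(w.max())
                    scale = (w_max - w_min) / 15 or 1.0  # 15 steps between 16 levels
                    q = np.round((w - w_min) / scale).astype(np.uint8)
                    return q, scale, w_min

                def dequantize(q: np.ndarray, scale: float, w_min: float) -> np.ndarray:
                    """Recover approximate float weights from the 4-bit representation."""
                    return q.astype(np.float32) * scale + w_min

                w = np.random.randn(8).astype(np.float32)
                q, scale, w_min = quantize_4bit(w)
                print(q)                                        # integers in 0..15
                print(np.abs(w - dequantize(q, scale, w_min)))  # small per-weight error
                ```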

                There are still a couple things to tackle:

                1. The lifetime of a neurotransmitter in a synapse.
                2. Neuroplasticity.

                The first one is kind of getting solved by attention heads and self-reflection, but I’d imagine adding extra layers that “surface” deeper states into shallower ones might be a closer approach.

                The second one… right now we have LoRAs, which are more like psychedelics or psychoactive drugs, working in a “bulk” kind of way… with surprisingly good results, but still.

                Where it will really start getting solved is with massive-scale neuromorphic hardware accelerators the size of a 1TB microSD card (a proof of concept is already here: https://www.science.org/doi/10.1126/science.ade3483 ), which could cut training times by 10 orders of magnitude. Shoving those into a billion smartphones, then into some humanoid robots, is when the NN age will really get started.

                Whether that’s going to take more or less than 5 years is hard to say, but surely everyone is trying as hard as possible to make it less.

                Then, imagine a “trainee” humanoid robot with maybe 1000 of those accelerators which, once it trains a NN for whatever task, can have that NN copied over to as many simple “worker” robots as needed. Imagine a company spending a few billion USD on training a wide range of those NNs, then offering a per-core subscription to other companies… at a fraction of the cost of similarly trained humans.

                TL;DR: we haven’t seen nothing yet.

                • mozzOP · 4 points · 10 months ago

                  by definition, humans are not more than a neural network.

                  Imma stop you right there

                  What’s the neural net that implements storing and retrieving a specific memory within the neural net after being exposed to it once?

                  Remember, you said not more than a neural net – anything you add to the neural net to make that happen shouldn’t be needed, because humans can do it, and they’re not more than a neural net.

                • @[email protected]
                  link
                  fedilink
                  1
                  edit-2
                  10 months ago

                  We don’t even know what consciousness or sentience is, or how the brain really works. Our hundreds of millions spent on trying to accurately simulate a rat’s brain have not brought us much closer (Blue Brain), and there may yet be quantum effects in the brain that we are barely even beginning to recognise (https://phys.org/news/2022-10-brains-quantum.html).

                  I get that you are excited, but it really does not help anyone to exaggerate the efficacy of the AI field today. You should read some of Brooks’ enlightening writing, like Elephants Don’t Play Chess, or the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

                  • @[email protected]
                    link
                    fedilink
                    1
                    edit-2
                    10 months ago

                    Where did I exaggerate anything?

                    We don’t even know what consciousness or sentience is, or how the brain really works.

                    We know more than you might realize. For instance, consciousness is the ∆ between separate brain areas: when they all go into sync, consciousness is lost. We see a similar behavior with NNs.

                    It’s nice that you mentioned quantum effects, since the NN models all require a certain degree of randomness (“temperature”) to return the best results.
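
                    As a minimal sketch of what “temperature” means in practice (the three logit values here are made up; a real model produces them over a whole vocabulary):

                    ```python
                    import numpy as np

                    def sample_with_temperature(logits: np.ndarray, temperature: float) -> int:
                        """Pick an output index from temperature-scaled softmax probabilities.

                        Low temperature -> near-greedy, repeatable choices.
                        High temperature -> flatter distribution, more randomness.
                        """
                        scaled = logits / temperature
                        scaled -= scaled.max()  # subtract the max for numerical stability
                        probs = np.exp(scaled) / np.exp(scaled).sum()
                        return int(np.random.choice(len(probs), p=probs))

                    logits = np.array([2.0, 1.0, 0.1])  # hypothetical scores for 3 tokens
                    print(sample_with_temperature(logits, 0.2))  # almost always index 0
                    print(sample_with_temperature(logits, 1.5))  # indices 1 and 2 show up often
                    ```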

                    trying to accurately simulate a rat’s brain have not brought us much closer

                    There lies the problem. Current NNs have overcome the limitations of 1:1 accurate simulation by solving only for the relevant parts, then increasing the parameter counts to a point where they perform better than the original thing.

                    It’s kind of a brute force approach, but the results speak for themselves.

                    the aeroplane analogy (https://rodneybrooks.com/an-analogy-for-the-state-of-ai/).

                    I’m afraid the “state of the art” in 2020 was not the same as the “state of the art” in 2024. We have a new tool: LLMs. They are the glue needed to bring all the siloed AIs together, a change as radical as the jump from air flight to spaceflight.

        • @[email protected]
          link
          fedilink
          410 months ago

          I’m assuming you’re being facetious. If not… well, you’re on the cutting edge of MBA learning.

          There are still some things that just don’t make it into books, or drawings, or written content. It’s one of the drawbacks humans have: we keep some things in our brains that just never make it to paper. I say this as someone who has encountered conditions in the field that have no literature on the effect. In the niches and corners of any practical field, there are just a few people who do certain types of work, and some of them never write down their experiences. It’s frustrating as a human doing the work, but it would not necessarily be so for an ML assistant, unless there is a new ability to understand and identify where solutions don’t exist and go perform expansive research to extend the knowledge. More importantly, it needs the operators holding the purse to approve that expenditure, trusting that the ML output is correct and not asking it to extrapolate in lieu of testing. Will AI/ML be there in 20 years to pick up the slack, put its digital foot down stubbornly, and point out that lives are at risk? Even as a proponent of ML/AI, I’m not convinced that kind of output is likely, or even desired by the owners and users of the technology.

          I think AI/ML can reduce errors and save lives. I also think it is limited in the scope of risk assessment where there are no documented conditions from which to extrapolate failure mechanisms. Heck, humans are bad at that too, but we are maybe more cautious, less confident, and aware of that caution and confidence. At least for the foreseeable future.

          • @[email protected]
            link
            fedilink
            110 months ago

            we keep some things in our brains that just never make it to paper

            ISO 9001 would like to talk to all those people and have them either document, or see the door. That’s not really cutting edge; it’s more of a basic business certification needed to even dream about bidding on any government-related project. (Then again, people still lie and don’t keep everything documented… and shit happens, but such are people.)

            some of them never write down their experiences

            Get a humanoid learning robot, and you’ll have a log of everything it experienced at the end of the day, with exact timestamps, photos, and annotations.

            understand and identify where solutions don’t exist and go perform expansive research to extend the knowledge

            Auto-GPT does it. The operator’s purse is why it doesn’t get used much more 😉

        • @[email protected]
          link
          fedilink
          410 months ago

          Anything a human can be trained to do, a neural network can be trained to do.

          Come on. This is a gross exaggeration. Neural nets are incredibly limited. Try getting them to even open a door. If we someday come up with a true general AI that really can do what you say, it will be as similar to today’s neural nets as a space shuttle is to a paper aeroplane.

            • @[email protected]
              link
              fedilink
              110 months ago

              I wouldn’t say $74k is consumer grade, but Spot is very cool. I doubt that it is purely a neural net, though; there is probably a fair bit of actionism at work.

          • @[email protected]
            link
            fedilink
            1
            edit-2
            10 months ago

            Try getting them to even open a door

            For now there is “AI vs. Stairs”; you may need to wait for a future video for “AI vs. Doors” 🤷

            BTW, that is a rudimentary neural network.

            • @[email protected]
              link
              fedilink
              210 months ago

              I’ve seen a million such demos, but simulations like these are nothing like the real world. Moravec’s paradox will make neural nets look like toddlers for a long time to come yet.

              • @[email protected]
                link
                fedilink
                110 months ago

                Well, that particular demo is more of a cockroach than a toddler; the neural network used seems to have fewer than a million weights.

                Moravec’s paradox holds true on two fronts:

                1. Computing resources required
                2. Lack of formal description of a behavior

                But keep in mind that that was in 1988, about 20 years before the first 1024-core multi-TFLOP GPU was designed, and that by training a NN we’re brute-forcing away the lack of a formal description of the algorithm.

                We’re now looking towards neuromorphic hardware on the trillion-“core” scale, so computing resources will soon become a non-issue, and the lack of a formal description will only be as much of a problem as it is for a toddler… until you copy the first trained NN to an identical body and re-training costs drop to effectively zero, which is much less than even training a million toddlers at once.
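
                To make the copying point concrete, a toy sketch (the layer sizes and the number of workers are made up): once one network has gone through an expensive training run, deploying it to identical bodies is a weight copy, not another training run:

                ```python
                import numpy as np

                # Stand-in for a network that already went through an expensive training run.
                trained_weights = [np.random.randn(128, 128) for _ in range(4)]

                # "Re-training" 1000 identical workers is just copying the weights over.
                workers = [[layer.copy() for layer in trained_weights] for _ in range(1000)]
                print(len(workers), "workers now share the one trained policy")
                ```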

    • @[email protected]
      link
      fedilink
      710 months ago

      He has literal examples of head counts increasing due to this use of AI; he’s not the idiot here.

      • @[email protected]
        link
        fedilink
        English
        210 months ago

        Anecdotes are not statistics.

        Head counts increasing at one company are often offset by losses at their competitors, as the growing company takes market share due to increased productivity.

        The number of auto mechanics went up as the number of horse ranchers went down.

        • @[email protected]
          link
          fedilink
          110 months ago

          A lack of anecdotes and data definitely isn’t data, either.

          There’s an argument to be made here, but the OP hasn’t made it; they’ve just asserted that a well-written, evidenced post was written by an idiot.

    • @[email protected]
      link
      fedilink
      410 months ago

      As someone who works for a very large company, on a team of around 500 people around the world, this is what concerns me. Our team will not be 500 people in a few years, and if it is, it will be because usage of our product has grown substantially. We are buying heavily into AI, and yet people believe it when our leadership teams claim it will not impact jobs.

      Will I be able to take a unit of 2 people down to 0 people? No, I’ve never seen a process where I could eliminate every human.

      Socially speaking, this is also very concerning to me. I’m afraid that implementation of AI will be yet another thing that makes it difficult for smaller businesses to compete in a global marketplace. Yes, a tech-minded company can leverage a smaller headcount into more capabilities, but this typically requires expensive and limiting turnkey solutions, or major investment in developers for a customized solution.

      • mozzOP · 6 points · 10 months ago

        I honestly have no idea what the solution is. To me the issue is that, with technology where it is, only about 20% of us actually have to do any work to keep all the wheels turning and provide for everyone. So far, in the western world, the solution has been to occupy people with increasingly-bullshit jobs (and, for some reason, to not give a lot of the people who do the actual work enough to live on), but as technology keeps getting more and more powerful, we’re increasingly being faced with the limits of “you have to work to live” as a way to set things up.

        • lad · 1 point · 10 months ago

          The solution to both bullshit jobs and no life outside work could be to downscale working time, not the number of people. If 20% of the people are enough to do the job, maybe it’s better to keep everyone but have them work only 20% of the time?

          That won’t pass the shareholders’ vote, of course, because optimization is only ever allowed to mean “money optimization”.