Jon Stewart tackles the AI revolution and how its creators are promising a better future while building technology to make human workers obsolete. #DailySho...
I have to say, I agree 90% with Jon on this. Which is significantly less than I usually agree with him.
I think he could have talked more about the lack of reliability of AI. It's not simply a drop-in replacement for people, like the invention of the conveyor belt or sewing machine. A better analogy would be the mass outsourcing of call center jobs to South Asia.
Well, that's where it's at now. There's no guarantee it will stay that way. Give Moore's law several more cycles, and maybe we'll have enough computing power to make drop-in replacements for humans.
I think people are misinformed about the current readiness of AI specifically because Silicon Valley VCs have taken a lot of the R&D funding market share from the DARPA government types.
VC funding decisions are heavily oriented around the prototype product demo (no grant writing!). This encourages "fake it till you make it": demo a fake product to get the funding to build the real one. This stuff does leak out to the public, and you end up with overstated capabilities.
There seems to be a misunderstanding of how LLMs and statistical modelling work. Neither can solve its accuracy problem, because both operate on a probability distribution and only find correlations in ones and zeros. LLMs generate the probability distribution internally, without supervision (a "black box"). They're only as "smart" as the human-generated input data, and they will always produce false positives and false negatives. This is unavoidable. There is simply no critical thought or intelligence whatsoever, only mimicry.
I'm not saying LLMs won't shake up employment, find their niche, and make many jobs redundant, or that critical general-AI advances won't occur, just that LLMs simply can't replace human decision making or control, and doing so is a disaster waiting to happen. The best they can do is speed up certain tasks, but a human will always be needed to determine whether the results make (real-world) sense.
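To make the "probability distribution" point in the comment above concrete, here is a minimal sketch, in plain Python with NumPy and an entirely made-up toy vocabulary and scores (no real model), of how a language model turns scores into a distribution over candidate next tokens and then samples one:

```python
import numpy as np

# Toy vocabulary and raw scores (logits) a model might assign to each
# candidate next token after a prompt like "The cat sat on the".
vocab = ["mat", "roof", "moon", "quantum"]
logits = np.array([3.1, 1.2, 0.4, -2.0])

# Softmax turns the raw scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Generation is just repeated draws from distributions like this one.
# The likeliest token usually wins, but lower-probability (and possibly
# wrong) tokens can and do get picked, and nothing in this loop checks
# the output against the real world.
rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)

for token, p in zip(vocab, probs):
    print(f"{token:>8}: {p:.3f}")
print("sampled:", next_token)
```

Every output token is a draw from a distribution fitted to the training text, which is why errors can be made rarer but never ruled out entirely.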
Feels like a bit of a loop back there. "It can only ever be as smart as human output. So we'll always need humans." To… what? Create equivalent mistakes? Maybe LLMs in their current form won't be the drop-in replacement, but they're a critical milestone and a sign of what's around the corner. So these concerns are still relevant.
It's only a matter of time before these companies start trying to simulate human brains.
This is why I invoked Moore's law earlier. People have already estimated how many petaflops or exaflops we would need to simulate a brain's worth of neurons and a complete connectome. We currently don't have enough computing power, but if the exponential growth continues, we will get there.
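For a rough sense of those estimates, here is a back-of-envelope sketch. Apart from the neuron count, every input is an assumption (published figures for synapse counts and per-synapse cost vary by orders of magnitude), so treat the result as an order-of-magnitude guess, not a fact:

```python
# Back-of-envelope estimate of the compute needed to simulate a brain's
# worth of neurons and synapses in real time. All inputs below are rough
# assumptions; published estimates vary by several orders of magnitude.
neurons = 8.6e10                  # ~86 billion neurons
synapses_per_neuron = 1e4         # assumed order of magnitude
updates_per_synapse_per_s = 1e2   # assumed average event/update rate
ops_per_update = 10               # assumed cost of a simple point-neuron model

flops = neurons * synapses_per_neuron * updates_per_synapse_per_s * ops_per_update
print(f"~{flops:.1e} FLOPS")      # ~8.6e17 FLOPS under these assumptions

# Swap in a biophysically detailed neuron model (thousands of operations
# per update) and the requirement jumps several more orders of magnitude,
# which is why the estimates people quote range so widely.
```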
Removed by mod
Moore’s law predicts that compared to 1980, computers in 2040 would be a BILLION times faster.
Also that compared to 1994 computers, the ones rolling out now are a MILLION times faster.
A cheap Raspberry Pi would easily be able to handle the computational workload of a room full of equipment in 1984.
What would have taken a million years to calculate in 1984 would theoretically take 131 hours today and 29 seconds in 2044…
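Those figures are straightforward doubling arithmetic. Here is a quick sketch of the calculation; the doubling period is an assumption, and the answers swing by orders of magnitude depending on whether you use 18 months or 2 years:

```python
# Idealized "Moore's law" arithmetic: speedup = 2 ** (years / doubling_period).
# The doubling period is an assumption; 18 months and 2 years are both
# commonly quoted, and the result is extremely sensitive to the choice.
def speedup(years: float, doubling_period_years: float) -> float:
    return 2 ** (years / doubling_period_years)

print(f"1980 -> 2040 at 2-year doubling:   {speedup(60, 2.0):.2e}x")  # ~1.1e9, "a billion"
print(f"1994 -> 2024 at 18-month doubling: {speedup(30, 1.5):.2e}x")  # ~1.0e6, "a million"
print(f"1994 -> 2024 at 2-year doubling:   {speedup(30, 2.0):.2e}x")  # only ~3.3e4

# The "million years of 1984 compute" comparisons use the same arithmetic:
hours = 1e6 * 365.25 * 24  # hours in a million years
print(f"run today (18-month doubling):  {hours / speedup(40, 1.5):,.0f} hours")
print(f"run in 2044 (18-month doubling): {hours * 3600 / speedup(60, 1.5):,.0f} seconds")
```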
Correction: Moore’s law predicts that the number of transistors on an integrated circuit would double every two years. It doesn’t make predictions about computers being “faster” or able to handle a certain “workload”. The only thing it predicts is the growth in physical capacity of a single chip.
And we actually broke Moore's law: that capacity growth slowed about a decade ago, once manufacturing techniques became the bottleneck.
Yes, yes, single-threaded execution and so on, but now we just build a crap ton more chips and keep increasing computational throughput per watt.
We've moved massive calculations onto GPUs, so in terms of computational capability it holds up.
I mean check this out https://en.wikipedia.org/wiki/FLOPS
The geometric growth is real. Moore’s law was just one way to explain it.
If it were only a matter of processing power, we’d already be able to demonstrate much more capable AIs. More computing power in more places will facilitate further development, but it’s the “further development” that’s key.
Personally, I'm looking for Moore's Law to make home AIs more responsive and more similar to today's cloud-based AIs.
The one I have configured is slow and not very good, but it's running on a Raspberry Pi, so I could throw more processing at it and probably will at some point.
There was an Apple announcement several weeks ago about optimizing performance on memory-constrained devices that has me really hopeful for effective home-based devices soon. I don't know what Apple's "neural processors" do, but I know my phone has them and maybe they apply here.
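For anyone wondering what that kind of home setup looks like in practice, here is a minimal sketch using the llama-cpp-python bindings, which can run small quantized models on modest hardware. The model path is a hypothetical placeholder, and on a Raspberry Pi you would want one of the smallest quantized models you can find:

```python
# Minimal local text generation with llama-cpp-python
# (pip install llama-cpp-python). Assumes a small quantized GGUF model
# has already been downloaded; the path below is a placeholder, not a
# recommendation of any particular model.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model.Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,    # context window; smaller values save memory on a Pi
    n_threads=4,   # match the CPU cores you actually have
)

output = llm(
    "Q: Why is the sky blue? A:",
    max_tokens=64,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

Throwing more hardware at it mostly buys a bigger model and a faster token rate; whether the answers are any good is the separate question this thread is about.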
Same boat here, this felt pretty uninformed to me.
I mean, he isn’t wrong that it will be used to fire people and to decimate labour. In fact I don’t think he really said anything “wrong”. He just didn’t paint as complete a picture as I would have liked.
To be fair, it’s not straightforward to explain on a comedy show the nuanced problems inherent in trying to replace people with token prediction engines.
Yeah, exactly what I mean
That’s not what “uninformed” means. A more appropriate term would maybe be “uninformative”.
You don’t know what the writers did or didn’t know. To me they seem uninformed about the topic considering what they omitted. You don’t sound cool for being pedantic.
By that logic assuming they’re uninformed based on what they didn’t say is just as silly as saying that they are informed based on it. What they did say wasn’t wrong, so there’s no reason to automatically assume they don’t know any more than what they included for general audiences.
Nonsensical comment lol. If I tell you how to make a grilled cheese and I leave out the part where you have to put butter in the pan, people are going to wonder if I actually know how to make a grilled cheese.
Bruh, this isn't an instructional video, it's an informative one; there's a difference. You're not giving me a recipe in your bizarre metaphor, you're just generally describing what a grilled cheese is, how it tastes, or why people eat it. You're not teaching people how to recreate a grilled cheese perfectly. If you left out the part where you put butter in the pan, I would just assume you left out the part where you put butter in the pan, because your audience isn't there for a recipe and doesn't give a shit about every possible minute detail.