We don’t fully know, nor are we able to model, the details of neurochemistry, but we know some essential features which we can model: action potentials in spiking neuron models, for example.
It’s likely that the details don’t actually matter much. Take traffic jams as an example. There are lots of details involved: driver psychology, the physical mechanics of the cars, and so on. But you only need a handful of very rough parameters to reproduce traffic jams in a computer.
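As a sketch of how few parameters that can be, here is a minimal version of the Nagel–Schreckenberg traffic cellular automaton (the parameter values below are illustrative choices, not fitted to real traffic):

```python
import random

random.seed(42)  # reproducible run

def step(road, length=100, v_max=5, p_slow=0.3):
    """One parallel update of the Nagel-Schreckenberg model.
    road maps cell position -> car speed on a circular road."""
    new_road = {}
    for pos, v in road.items():
        gap = 1
        while (pos + gap) % length not in road:  # distance to car ahead
            gap += 1
        v = min(v + 1, v_max)        # accelerate toward the speed limit
        v = min(v, gap - 1)          # brake to avoid the car ahead
        if v > 0 and random.random() < p_slow:
            v -= 1                   # random slowdown: all the "psychology"
        new_road[(pos + v) % length] = v
    return new_road

road = {3 * i: 0 for i in range(30)}  # 30 cars on a 100-cell ring
for _ in range(200):
    road = step(road)
# the speeds now show clusters of stopped cars: spontaneous jams
```

Three rough parameters (speed limit, braking rule, random slowdown probability) are enough; everything else about drivers and cars is abstracted away.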
That’s the thing with “emergent” phenomena, they are less complicated than the sum of their parts, which means you can achieve the same dynamics using other parts.
Even if you ignore all the neuromodulatory chemistry, much of the interesting processing happens at sub-threshold depolarizations, depending on millisecond-scale coincidence detection from synapses distributed through an enormous, slow-conducting dendritic network. The simple electrical-transmission model, where an input neuron causes reliable spiking in an output neuron, comes from skeletal muscle, which served as the model for synaptic transmission for decades simply because it was a lot easier to study than actual inter-neuronal synapses.
But even that doesn’t matter if we can’t map the inter-neuronal connections, and so far that’s only been done for the ~300 neurons of the C. elegans ganglia (i.e., not even a ‘real’ brain), after a decade of work. That’s nowhere close to mapping the neuroscientists’ favorite model, Aplysia, which has only about 20,000 neurons. Maybe statistics will wash out some of those details by the time you get to the human brain’s ~10^11 neurons, but considering how badly current network models predict even simple behaviors, I’m going to say more details matter than we will discover any time soon.
Thanks, fellow traveller, for punching holes in computational stupidity. Everything you said is true, but I also want to point out that the brain is an analog system, so the information in a neuron is infinite relative to a digital system (cf. digitizing analog recordings). As I tell my students: if you are looking for a binary event to start modeling, look to individual ions moving across the membrane.
So it’s not infinite and can be digitized. :)
But to be more serious, digitizing analog recordings is a bad analogy, because audio can be digitized and perfectly reproduced. The Nyquist–Shannon sampling theorem means the output of a band-limited signal can be perfectly reconstructed. It’s not approximate. It’s perfect.

https://en.m.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
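To make that concrete, here is a small sketch (tone frequencies and sample rate are arbitrary choices) that reconstructs a band-limited signal at a point that was never sampled, using Whittaker–Shannon sinc interpolation:

```python
import numpy as np

def signal(t):
    # band-limited "analog" signal: all energy below 400 Hz
    return np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 350 * t)

fs = 1000.0                # sample rate, above the 800 Hz Nyquist rate
n = np.arange(1000)        # one second of samples
samples = signal(n / fs)

def reconstruct(t):
    # Whittaker-Shannon interpolation from the discrete samples
    return np.sum(samples * np.sinc(fs * t - n))

t = 0.31416                # a point between sample instants
err = abs(reconstruct(t) - signal(t))
# err is tiny; with an infinite train of samples it would be exactly zero
```

The only error left comes from truncating the sample train to one second, not from the sampling itself.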
It’s an analogy. There is actually an academic joke about the point you are making.
A mathematician and an engineer are sitting at a table drinking when a very beautiful woman walks in and sits down at the bar.
The mathematician sighs. “I’d like to talk to her, but first I have to cover half the distance between where we are and where she is, then half of the distance that remains, then half of that distance, and so on. The series is infinite. There’ll always be some finite distance between us.”
The engineer gets up and starts walking. “Ah, well, I figure I can get close enough for all practical purposes.”
The point of the analogy is not that one can’t get close enough for the ear to detect a difference; it’s that in theory analog carries infinite information. It’s true that vinyl recordings are not perfect analog systems because of physical limitations in the cutting process, and the same goes for magnetic tape etc. But don’t mistake the metaphor for the idea.
Ionic movement across membranes, at the scale we are talking about and given the density of channels in the system, is much closer to an ideal analog system. How much of that fidelity can you lose before it’s not your consciousness?
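One way to put numbers on that question (a toy sketch; the waveform and bit depths are arbitrary stand-ins, not a neuron model): each added bit of quantization halves the maximum error, so "infinite" analog information only matters down to the system's own noise floor.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t)     # stand-in "analog" waveform in [-1, 1]

def quantize(x, bits):
    """Round x onto 2**bits evenly spaced levels spanning [-1, 1]."""
    levels = 2 ** bits - 1
    return np.round((x + 1) / 2 * levels) / levels * 2 - 1

for bits in (2, 4, 8, 16):
    err = np.max(np.abs(quantize(x, bits) - x))
    print(bits, err)   # max error is half a step: about 1 / (2**bits - 1)
```

At 16 bits the worst-case error is already around 10^-5 of full scale; the question is where the biological noise floor sits relative to that.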
"I’d like to talk to her, but first I have to cover half the distance between where we are and where she is, then half of the distance that remains, then half of that distance, and so on. The series is infinite."
I get that it’s a joke, but it’s a bad joke. That’s a convergent series: infinitely many terms, but a finite sum. Any first-year calculus student would know that.
"it’s that in theory analog carries infinite information."
But in reality it can’t. The universe isn’t continuous, it’s discrete. That’s why we have quantum mechanics: it is the math for handling discontinuous transitions between states.
"How much of that fidelity can you lose before it’s not your consciousness?"
That can be tested with C. elegans. You can measure changes until a difference is propagated.
Measure differences in what? We can’t ask *C. elegans* about its state of mind, let alone consciousness. There are several issues here: a philosophical issue about what you are modeling (e.g., mind, consciousness, or something else), a biological issue about which physical parameters and states you need to capture to produce that model, and a methodological issue about how you would test the fidelity of that model against the original organism. The scope of these issues is well outside a reply chain in Lemmy.
Analog signals can only be “perfectly” reproduced up to a specific target frequency. If the actual signal were composed of infinitely many frequencies, you would need an infinite sampling rate (twice the highest frequency present) to reproduce it completely.
There aren’t infinite frequencies.

“The mean free path in air is 68 nm, and the mean inter-atomic spacing is some tens of nm (about 30), while the speed of sound in air is 300 m/s, so that the absolute maximum frequency is about 5 GHz.”
"The term ‘mean free path’ sounds a lot like an average to me, implying a distribution which extends beyond that number."

One cubic centimeter of air contains roughly 2.5 × 10^19 molecules. In that context, the mean free path is 68 nm up to the limits of your ability to measure. Flip a coin 10^19 times and average the heads and tails: it’s going to be extremely close to 50%.
Not to mention that at 5 GHz, the sound can only propagate 68 nm.
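The quoted limit is just the order-of-magnitude estimate frequency = speed of sound / mean free path; redoing the arithmetic (using 343 m/s for room-temperature air rather than the rounded 300 m/s in the quote):

```python
v_sound = 343.0    # m/s, speed of sound in room-temperature air
mfp = 68e-9        # m, mean free path quoted above
f_max = v_sound / mfp
print(f"{f_max / 1e9:.1f} GHz")   # prints "5.0 GHz"
```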
Yes, the connectome is kind of critical. But other than that, sub-threshold oscillations can be and are being modeled. It also does not really matter that we are digitizing here: fluid dynamics is continuous, and we can still study, model, and predict it using finite lattices.

There are some things that are missing, but very clearly we won’t need to model individual ions, and there is plenty of other complexity that will not affect the outcome.
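As a toy illustration of the finite-lattice point (diffusion rather than full fluid dynamics, with arbitrary grid and step sizes): a continuous equation can be simulated faithfully on just 50 grid points.

```python
import numpy as np

# continuous diffusion du/dt = D * d2u/dx2 on a 50-point periodic lattice
D, dx, dt = 1.0, 0.02, 1e-4      # D*dt/dx**2 = 0.25 <= 0.5, so stable
u = np.zeros(50)
u[25] = 1.0                       # initial spike of "stuff"

for _ in range(200):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # discrete Laplacian
    u += D * dt / dx**2 * lap

# total mass is conserved and the spike has spread into a smooth bump,
# even though we never modeled anything below the lattice spacing
```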
I heard a hypothesis that the first human-made consciousness will be an AI algorithm designed to monitor and coordinate other AI algorithms, which makes a lot of sense to me.
Our consciousness is just the monitoring system for all of our body’s subsystems. It is most certainly an emergent phenomenon of the interaction and management of different functions competing or coordinating for resources within the body.
To me it seems very likely that the first human-made consciousness will not be designed to be conscious. It also seems likely that we won’t be aware of the first consciousnesses because we won’t be looking for them. Consciousness won’t be the goal of the development that makes it possible.
I’d say the details matter, based on the PEAR laboratory’s findings that consciousness can affect the outcomes of chaotic systems.
Perhaps the reason evolution selected for enormous brains is that’s the minimum necessary complexity to get a system chaotic enough to be sensitive to and hence swayed by conscious will.
PEAR? Where staff participated in trials, rather than running double-blind experiments? Whose results could not be reproduced by independent research groups? Who were found to employ p-hacking and cherry-picked data?
You might as well argue that simulating a human mind is not possible because it wouldn’t have a zodiac sign.