Consider the words on the screen. There are two sources of information.
- The words, how they're arranged and such.
- The meaning that you assign to the words. Meaning drawn from a lifetime of memories.
99% of the information comes from the assigned meaning. So 99% of what’s going on here is you talking to yourself.
You wrote it. I’m not randomly imagining things.
Says the person who may not even have a rich inner life, at least as far as I know.
Language arises out of social behavior, beginning with imitation and reinforcement in early childhood. Children who don’t learn language (by interacting with adults and older children) in the critical period of early childhood, suffer serious developmental problems. So language is fundamentally anti-solipsistic, even anti-individualistic: you only acquire it by being brought into a community of language-users.
And written language begins as an encoding for spoken (or signed) language: everyone learns to speak (or sign) before they learn to read, and learning to read starts with learning associations between written structures and spoken ones. (For English-speakers, that means phonics: the relationship between letters or groups of letters, and sounds.)
Meaning isn’t “assigned” solipsistically; rather it’s “acquired” from a community of use. A single user can’t decide for themselves that “dog” means squirrel. I suspect that if you look at the word “dog” and try to convince yourself that it refers to a bushy-tailed tree-climbing nut-munching rodent, you will be aware that you are doing something very silly, something deliberately contrary to your knowledge.
In the past you received a symbol, and a meaning to go with that symbol.
In the present, you refer to that meaning when you see that symbol.
Yes, there was some kind of deeper communication in the past. The dictionary was written (among other things, like the vastness of non-symbolic experience, sensations and such).
But in the present, in the act of “reading”, it’s just you, the dictionary (symbolic and nonsymbolic) and the symbol stream. However you slice it, the dictionary is large and the symbol stream is small.
The Treachery of Images by René Magritte
The text in the painting reads “Ceci n’est pas une pipe”, which (more or less) translates to “This is not a pipe.” What Magritte is saying is that, literally, it is not a pipe; it is a painting of a pipe. But it’s also not just a painting of a pipe: it is an image of what Magritte holds as his internal concept of a pipe.
Communicating complex ideas to others is difficult because words hold different meanings for different people, and the more nuanced the idea, the more language starts to get in the way. But all communication is imperfect.
It’s not solipsistic, though, because much of the meaning you might hold for words is contextual to the people who taught you language and your relationships with others. Your conception of those words’ meanings is your interpretation of the meaning someone else holds. Language and communication are much more effective viewed from the perspective of collaboration rather than expression. In other words, working to establish shared understanding makes communication more effective.
Ok, I’ll be the one who does it…
raises hand
What does solipsistic mean in this context? Or any context at all, for that matter…
The idea that only your own mind is certain to exist is solipsism. I think I sort of get what OP is saying: that because words don’t mean anything outside of human brains, they’re solipsistic.
But I disagree because the fact that someone else wrote it means it isn’t solipsistic. If you believe in solipsism (I don’t) then everything is in your own mind, not merely certain parts.
If you don’t believe in solipsism, then you acknowledge the external world exists so it’s specifically a form of communication. It feels odd to believe in the external world but look at communication, something we’ve specifically developed to transmit our internal thoughts to others, and call it solipsistic when it wouldn’t exist if we believed there was no other hidden information inside other “people’s” minds (or whatever solipsists believe people are).
It means yr all sucked up into yr head. Talking to yrself. Playing games you created.
Look at the semiotic theories stemming from Ferdinand de Saussure over a century ago: he would reverse the relative importance of your 1 and 2, arguing that words derive most of their meaning from their arrangement and interrelationships, and that most of the meaning we see in the world flows from the relationship between signs/words into our perception of their referents.
That’s an interesting way of framing it. But I think it’s more correct to say that our shared understanding of language and definitions of words means we are mostly translating the writer’s thoughts and intentions as best we can from their mind into our minds.
It’s true that no two people will be 100% in sync with their understandings and internal definitions of things, but the overlap will be quite large. So hopefully, actual communication is possible from person A to person B.
However, it’s also possible for people to “talk past each other” as we often see in political discourse. So it’s highly imperfect. But I would not say it’s just a person talking to themselves when they read what another person has posted.
https://en.m.wikipedia.org/wiki/Private_language_argument Tangential but still
Mom! Mom, the universe is talking to itself again.
???
I mean, words aren’t just used by yourself. Many people collectively assigned a meaning to specific words. How is this solipsism?
Words are ways to encode ideas, like 0s and 1s in a computer or flashes of light in Morse code.
Words are a way to express ideas from your brain to another brain.
Flowchart:
Your Brain -> Idea -> Word -> Spoken -> Vibration in air -> Interpreted by another person’s ear -> Word -> Idea -> The other person’s Brain
Does that make sense?
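If it helps, here’s a rough toy sketch of that flowchart in Python (entirely my own illustration; the `speak`/`hear` names and the Morse table are just assumptions made up for the example). The point it’s meant to show: the receiver only recovers the idea because both ends share the same codebook, which is the opposite of solipsism.

```python
# Toy model of the flowchart (my own illustration): idea -> word -> signals
# in a channel -> word -> idea. Decoding only works because sender and
# receiver share the same codebook (here, a Morse-code table).

MORSE = {
    "a": ".-",   "b": "-...", "c": "-.-.", "d": "-..",  "e": ".",
    "f": "..-.", "g": "--.",  "h": "....", "i": "..",   "j": ".---",
    "k": "-.-",  "l": ".-..", "m": "--",   "n": "-.",   "o": "---",
    "p": ".--.", "q": "--.-", "r": ".-.",  "s": "...",  "t": "-",
    "u": "..-",  "v": "...-", "w": ".--",  "x": "-..-", "y": "-.--",
    "z": "--..",
}
DECODE = {code: letter for letter, code in MORSE.items()}

def speak(word: str) -> list[str]:
    """Sender side: encode a word into signals (the 'vibration in air' step)."""
    return [MORSE[ch] for ch in word.lower()]

def hear(signals: list[str]) -> str:
    """Receiver side: decode signals back into a word, using the shared codebook."""
    return "".join(DECODE[s] for s in signals)

signals = speak("dog")   # ['-..', '---', '--.']
print(hear(signals))     # 'dog' -- recovered only because both ends share MORSE
```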
It’s not solipsism… idk what you mean lol
- (Is OP high? 🤔)
Each word is an idea, and each idea already exists in your head. What I’m doing isn’t giving you new ideas, but arranging the ideas to get across a meaning.
It’s like rock gardens. I didn’t make the rocks, but I order them how I like, and I can make a smiley face out of them. I can share my arrangement, and someone else can try to make it with their rocks. It won’t be the same smiley face, but it’s still a smiley face, and that’s all that matters.
And if there’s a rock in your garden that’s just a little TOO different to the one in mine, maybe the smiley face won’t look right. It happens. If it matters, we can try to move some rocks around and get the right arrangement for the both of us.
Your source number 2 involves a hard interdisciplinary research problem: “what is meaningful language use?” My grad school thesis was tangentially related to it, so I’m most familiar with it from the AI perspective. Early AI researchers quickly realized that you can’t just dump a dictionary into a computer and suddenly have it understand language. You can add high-level “scripts” to the computer (Schankian scripts), but then it will just be manipulating symbols (the Chinese room problem). You can tie the symbols to things in the world (symbol grounding) or to its own processing (embodied meaning), but how do you coordinate those symbols with other agents, be they people or machines?
Think about that last question for a moment. Do you have an answer? I don’t think anyone does yet, so whatever you’re thinking is probably a good start towards further reading. @[email protected]’s reply points out some of the issues involved, and these issues suggest the problem’s interdisciplinary nature: psychology, sociology, corpus linguistics, philosophy (both analytic i.e. Wittgenstein and Kripke, and continental as suggested by @[email protected]’s reply), cognitive science, neurolinguistics, etc. Literary theory is fun too, they say things like: an interpretation is situated, subjective, and performative. OK, sounds great, but how do you turn that into something that a computer does? It turns out that there are a lot of great ideas, but there’s still a lot of work to do to tie it all together. (and unfortunately way too many people think deep neural networks / LLMs can just solve it all by themselves grumble grumble…)
Given the above, to answer your specific question: “Is [meaningful] language mostly solipsistic?” I think most people would say “probably not”, with the caveat that it depends on how you define your terms. Clearly there are very important processes that work only in your own cognitive system, but it seems likely that external factors also play a necessary role.
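To make the “just manipulating symbols” point above concrete, here’s a throwaway toy sketch (my own illustration, not Schank’s actual scripts or anyone’s research code): a lookup table can emit sensible-looking replies while nothing in it is tied to the world at all, which is roughly the gap that symbol grounding is about.

```python
# Toy "script" in the Chinese-room spirit (my own illustration): symbols map
# to symbols, and nothing here is grounded in anything outside the table.

SCRIPT = {
    "hello": "hi there",
    "how are you": "fine, thanks",
    "what is a dog": "a dog is a domesticated canine",
}

def reply(utterance: str) -> str:
    """Pure symbol manipulation: look the input up, emit more symbols."""
    key = utterance.lower().strip("?!. ")
    return SCRIPT.get(key, "i don't understand")

print(reply("What is a dog?"))  # plausible-sounding answer, zero understanding of dogs
```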
No, I am a follower of Wittgenstein’s notion of “language games”, where all language is socially constructed by groups of people participating in a self-perpetuating linguistic context. It is an objective phenomenon, as words which have no meaning to another person cannot exist.
They aren’t different sources of info, but parts of the same process. And there are three of them:
- The utterance. Like you said, the words and how they’re arranged and such.
- Your internalised knowledge. It’s all that bundle of meanings that you associate with each word, plus your ability to parse how those words are arranged.
- The context. It’s what dictates how you’re going to use your internalised knowledge to interpret the utterance; for example, selecting one among many possible meanings.
Without any of those three things, you get 0% of the info. They’re all essential.
So no, it is not solipsistic at all, since it depends on things outside your head (the utterance and the context), and those are shared by multiple individuals.