The other comments here largely provide within-individual examples: others noted Helen Keller and that some people do not experience an internal monologue. These tell us about the sort of thinking that happens within a person, but I think there are many forms of communication which are not linguistic, and therefore there is also external thinking which is non-linguistic.
The observation that not all thought uses linguistic representations (see particularly the annotated references in the bibliography) tells us something about the representations that may be useful for reasoning, thought, etc. Though language _can_ represent the world, it is neither the only way to do so nor the only way used by biological beings.
^[0]: It Takes Two to Think https://www.nature.com/articles/s41587-023-02074-2
There are a lot of things I can think about that I do not have words for. I can only communicate these things in an unclear way, as language is clearly a subset of thought, not a superset.
Only if your definition of thought is that it is language-based, which is just typical philosophy circular logic.
Learning a second language let me notice how much of language has no content. When you're listening to meaningless things in your second language, you think you're misunderstanding what they're saying. When you listen to meaningless things in your first language, you've been taught to let the right texture of words slip right in. That you can reproduce an original and passable variation of this emptiness on command makes it seem like it's really cells indicating that they're from the same organism, not "thought." Not being able to do it triggers an immune response.
The fact that we can use it to encode thoughts for later review confuses us about what it is. The reason why it can be used to encode thoughts is because it was used to train us from birth, paired with actual simultaneous physical stimulus. But the physical stimulus is the important part, language is just a spurious association. A spurious association that ultimately is used to carry messages from the dead and the absent, so is essential to how human evolution has proceeded, but it's still an abused, repurposed protocol.
I'm an epiphenomenalist, though.
What on earth do you mean?
The original Vygotsky claim was that learning a language introduces the human mind to thinking in terms of symbols. Cats don't do it; infants don't either.
It's more the latter for me. I don't think there's necessarily one type of internal thought, I think there's likely a multimodal landscape of thought. Maybe spatial reasoning modes are more geometric, and linguistic modes are more sequential.
I think the human brain builds predictive models for all of its abilities for planning and control, and I think all of these likely have a type of thought for planning future "moves".
I just have to remember how I built something and where the code is. We can take a quick dive into the code base and I don't have to yet again attempt to serialize my mental model of my system into something someone else may understand.
It can be difficult to explain why using the path on the underlying mount volume's EBS volume to carry metadata through Filebeat, Logstash, Redis and Kinesis to that little log stream processor was in fact the cleanest solution and how SMS was invented. It's easier when you can get the LLM to do it ;)
The language model is exclusively built upon the symbols present in the training set, but various layers can capture higher level patterns of symbols and patterns of patterns. Depending on how you define symbolic representation, the manipulation of the more abstract patterns of patterns may be what you are getting at.
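As a toy sketch of what "patterns of patterns" could mean here (Python; purely illustrative, since real transformer layers operate on continuous vectors rather than discrete tuples):

```python
# Each "layer" detects patterns over the outputs of the layer below,
# so deeper layers operate on patterns-of-patterns, not raw symbols.
tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Layer 1: patterns over raw symbols (adjacent token pairs).
bigrams = list(zip(tokens, tokens[1:]))

# Layer 2: patterns over layer-1 patterns, an abstraction of abstractions.
bigram_pairs = list(zip(bigrams, bigrams[1:]))

print(bigrams[0])       # ('the', 'cat')
print(bigram_pairs[0])  # (('the', 'cat'), ('cat', 'sat'))
```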
Agreed, this bears repeating. This point is not obvious to someone interacting with the LLM. That it is able to mash up custom responses doesn't make it a thinking machine; the thinking was done ahead of time, as is the case when you read a book. What passes for intelligence here is the mash-up, a smooth blending of digested text, which was selected by statistical relevance.
For the former task, they're brilliant but everyone seems to have fallen for the branding and forgotten the technology behind it. Given an input, they set off a chain reaction of probability that results in structured language, in the form of tokens, as the output. The structure of that language is easier to predict - you ask it for an app that's your next business idea and it'll give you an app that looks like your next business idea. And that's it.
Because that's all you've given it. It's not going to fill in the blanks for you. It can't. Not its job.
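A minimal sketch of that chain reaction of probability (Python; `next_token_probs` is a hypothetical stand-in for a real model, and the tiny vocabulary is made up):

```python
import random

def next_token_probs(context):
    # Stand-in for the model: a probability distribution over the
    # vocabulary, conditioned (in a real model) on the tokens so far.
    vocab = ["def", "app", "(", ")", "idea", "business"]
    weights = [random.random() for _ in vocab]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(vocab, weights)}

def generate(prompt_tokens, max_new=8):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        probs = next_token_probs(tokens)
        # Each sampled token reshapes the distribution for the next
        # one: the "chain reaction" that yields structured language.
        choice = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(choice)
    return tokens

print(generate(["build", "my", "next", "business", "idea"]))
```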
If you were building a workflow, would you put something called "Generative" in one of those diamond shaped boxes that normally controls flow? That sounds more like a source to me, something to be filtered and gated and shaped before use.
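Roughly this shape, in sketch form (every name here is hypothetical, just to make the source/filter/gate idea concrete):

```python
def generate_candidates(prompt, n=5):
    # Source: stand-in for an LLM call returning n candidate outputs.
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def passes_gate(candidate):
    # Gate: deterministic checks (schema validation, tests, lint,
    # policy) decide what flows onward, not the generator itself.
    return len(candidate) < 200  # placeholder check

def pipeline(prompt):
    candidates = generate_candidates(prompt)            # source
    vetted = [c for c in candidates if passes_gate(c)]  # filter/gate
    return vetted[0] if vetted else None                # shaped output

print(pipeline("draft the incident summary"))
```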
That's what context is supposed to be for. Not "here's a series of instructions now go do them"
They'll be lost before they get to number three, they have no sense of time you know. Cause and effect is a simulation at best. They have TodoWrites now, those are brilliant for best approximation which is really all we need at the moment, but procedural prompting is still why everyone thinks "AI" (/Generative/AI) is broken.
They're going to give the same structured text regardless, you asked for a program after all. Give them more context, you call it RAG, I call it a nice chat - whatever it is, you are responsible for the thinking in the partnership. They're the hyperactive neurodivergent kid that can type 180wps and remembers all of StackOverflow, you're the patient parent that needs to remind them to clean their room before they go out (or completely remove all traces of the legacy version of feature X that you just upgraded so you don't end up with 4 overlapping graph representations). You're responsible for the remembering, you're responsible for the thinking - they're just responsible for amplifying your thoughts and letting you explore solution spaces you might not have had the time for otherwise.
Or you can build something to help you do that. Structured memory (mine's spatial, the direction of the edges itself encodes meaning) with computational markdown as the storage mechanism so we can embed code, data and prose in the same node.
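Roughly like this, as a simplified sketch (the names are illustrative, not the actual system):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    markdown: str  # code, data and prose embedded in one document

@dataclass
class Memory:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src, relation, dst)

    def link(self, src, relation, dst):
        # Direction encodes meaning: (a, "supersedes", b) is not
        # the same statement as (b, "supersedes", a).
        self.edges.append((src, relation, dst))

mem = Memory()
mem.nodes["feature-x"] = Node("feature-x", "# Feature X\n\nnotes, code, data")
mem.nodes["legacy-x"] = Node("legacy-x", "# Legacy X\n\nold representation")
mem.link("feature-x", "supersedes", "legacy-x")
```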
I demoed a thing on here the other day that shows how I set up RSpec tests that execute when you read the spec that describes the feature you're building. A logical evolution of Obie's keynote. Now they just do it automatically (mostly; if they're new - fresh context - I have to reference the tag that explains the pattern so they pick it up first).
It's still not thinking in the traditional sense of the word, where some level of conscious rationality is responsible for the breakthrough. Consider, however, how much of human progress has come through accident (Fleming, Spencer, Goodyear, Fahlberg, Röntgen, Hofmann) or misunderstanding (Penzias and Wilson, Post-its, Viagra).
Most human breakthroughs have come through pattern recognition, conscious or unconscious. For that, language is all that is needed, and language is sufficient. If an idea can be described by language, and if we suppose that the grammar of a language allows its atoms (and therefore its ideas) to be composed and decomposed, does it not follow that a consciousness (machine or otherwise) trained in the use of that language can form new ideas through the mere act of synthesis?
Imagine a post-apocalyptic scenario where people keep the tradition of following Polly the Robot on a ritual tour of the Labs of Eemaeetee - but none remember what the sounds made by Polly used to refer to, or indeed that they referred to anything. That wouldn't preclude humans from learning to reproduce Polly's liturgy, or even from burning at the stake curious folk who try to decode its ancient meaning.
Well, I think we've already been there for a while.
I think this: you don't need language to have an idea, or to be creative.
To think about it beyond that, though (asking critical questions, having an inner dialogue _about_ the ideas and creativity), that, I think, is what 'thought' is, and it requires language as its sort of inner communication...
And pre-Heidegger, pre-psychoanalysis, what then? How did somebody, e.g. Heidegger, think those thoughts without the vocabulary to do so? Ahhh, apparently, they didn't need to. Turns out, language is not required for thought; thought can invent language.
Words are essentially very poor forms of interoception or metacognition. They "explain" our thoughts to us by fooling us. Yet how much of our senses and perceptions is accessible in consciousness? Not very much. The computer serves to further the maladaptation by both accelerating the symbols and automating them, which puts the initial real events even further from reach. The only game is how much we can fool the species through the low-res inputs the PFC demands. This appears to be a sizable value center for Silicon Valley, and it seems to require coders to ignore the whole of experience and rely solely on the bottlenecked simulation centers of the PFC, which themselves are disconnected from direct sensory access. Computers, 'social' media, AI, code, VR essentially "play" the PFC.
How these basic thought experiments, tested in cognitive neuroscience since the 90s during the overthrow of the cog-sci models of the 40s-80s, were not taught as primer classes in AI and computer science is beyond me. It now takes third-generation neurobiology crossed with linguistics to set the record straight.
These are not controversial ideas now.
I was bemused, and thought... "people think in words?"
Apparently people with ADHD or autism can develop the inner voice later in life.
In my 20s, language colonized my brain. Took me years of meditation to get some peace and quiet back...
I do meditate here and now, but sooner or later the constant stream of words will 100% set in again, usually during or immediately after meditation. And these words for example tell me or discuss whether I should go shower, go to gym, do dishes, or whatever. And in the end I'll decide based on that discussion and do it. It's weird how defined I am by this inner voice.
I struggle to imagine how people can find the time to consider all of these trivial choices verbally - in my case it all happens almost instantaneously and the whole process is easy to miss. I also don’t see what the monologue adds to the process - just skip this part and make the decision!
That said, I do use an inner voice when writing, preparing what to say to someone, etc. and I feel like I struggle with this way of thinking much more.
Edit: maybe this is like the difference between a diffusion model and a "next token" model. I always feel a need to jump around and iteratively refine the whole picture at once. Hard to maintain focus.
Without that, one does not learn quickly what another human already thought and tried out in the past (2 hours or 2 years or 2 millennia ago, it does not matter), civilization never progresses to the point it has, and we reinvent all the same things repeatedly ("look ma, I strapped a rock to a stick and now I can bash a lion's head in").
So really, if you struggle with this part of the process, you'd need to rely on somebody else who can understand your "invention" as well as you do, and can do a good job of putting it into words.
Really, this is what makes the academic process, well, academic.
I believe many of the things you bring up still involve symbolic reasoning (e.g. how do you decide what is too much coffee if you do not think in a representation like "I had N" or "too many"? How do you consider code transformations unless you think in terms of the structure you have and the one you want to get to?).
It's no surprise that one is good with one language and sucks at another, though: otherwise, we'd pick up new languages much faster, and not struggle as much with different types of languages (both spoken: think tonal vs. not, or Hungarian vs. anything else ;) and programming: think procedural vs. functional).
So spoken/written languages are one symbolic way to express our internal cognition, but even visual reasoning can be symbolic (think non-formal and formal flowcharts, graphs, diagrams... e.g. things like UML or algorithm boxes use precisely defined symbols, but the symbols don't have to be that precise for symbolic reasoning to be happening).
The question is whether it is useful to make a distinction between all reasoning and that particular type of reasoning, and to reuse a common, related word ("thinking", "thought") for it, or not.
You don't notice it, but that inner voice is only the surface. It is generated from what's going on deeper. You may not notice that it is very good at occupying your attention. Your "real" thoughts are deeper; processes then generate speech based on those deeper structures.
Language communication is not a true representation of what you know. Externalizing what we know into words is a messy, iterative process. We also end up with people who use the same words yet don't understand one another.
An instance of that is the often used (at least on reddit) bell curve meme - https://i.imgur.com/cUOiP2d.jpeg
It is not that the person on the right has the same understanding as the one on the left: theirs is far deeper, but they end up using the same words. The knowledge behind the words is hard to express; when you try, you will not truly convey your internal state. The words are iteratively and messily derived from exploring your inner state, with varying success.
For better or worse, language has the attention of the people. We end up with magical tales about "true names", where knowing an entity's "true name" gives you full control. Or with magic that is invoked by speaking certain phrases, and the universe obliges. Or with heated discussions about arbitrary definitions when it rarely matters, and when you really shouldn't, because if you get to the inevitably fuzzy edges of the actual concepts behind words, you should just switch to other words and metaphors that put the subject you are interested in discussing in the middle instead of at the edge. In reality, our internal models and thinking are hidden in our not-that-well-understood (except in the minute details; those we know a great deal about) neural networks.
I mostly agree with you, but I always find it a bit funny how we are the only things/beings that seem to be aware of their own (meta)cognition, yet I can't actually pop my hood like a car, so to speak, to understand what actually goes on. It gets funnier when we generally can't agree on what goes on in our heads just by talking about it with each other. I don't suppose the fox thinks about why it entered the hen house after the meal, or what led it to such an act.
More relatedly, when I wrote this comment I still couldn't tell whether I engaged my inner monologue and wrote by dictation, as it were, or let my fingers do the thinking and read back what they wrote.
Discussions about the mind's eye and inner monologue and so on are always fun but most of the time I never get that much out of them other than satisfying curiosity.
As an aside, I remember reading somewhere that some speed-reading techniques involve not speaking in your mind the words you're reading (forgoing your inner monologue) and just internalizing their form and their associated meaning that you already know, or something like that.
Occasionally there is some snippet of a sentence I imagine, but it’s almost always cut off prior to finishing the sentence. If I imagine writing something, though, I’ll speak it to myself in my head.
Funnily enough, I'm a pretty weak mental visualiser too. I don't have aphantasia, but mental images are very transparent and dark.
This is just a mistake on your part. Your thoughts are already not in words.
I don't really think in language either. To me, thought is much more a kind of abstract process.
It's also consistent with our intuition that toddlers have consciousness and thoughts, and other mammals at least consciousness (and emotions), without language.
What do you mean "think in words"? Is it like a narrator, or a discussion like Herman's Head? Are you hearing these words all the time or only when making decisions?
If people don't have an inner voice, it must also be the case that some people (these people?) don't have consciousness. It isn't obvious that consciousness is essential to fitness, especially if an inner voice isn't. Some people may be operating as automatons.
Don’t see how you got to that.
Another issue is that a lot of tasks in the modern world are rooted in language; law or philosophy is in large part just word games, and you won't be able to get far thinking about them without language, as those concepts don't have any direct correlate that you could experience by other means.
Overall I do agree that there are plenty of problems we can solve without language, but the types of problems that can and can't be solved without language would need some further delineation.
I wouldn't call those underlying processes "thinking", but it is a matter of definition.
This is also why those who just use LLMs to write those court submissions we've read about fail: there was no actual reasoning happening, just a stream of words coming out, and then you need to validate everything, which is hard, time-consuming and... boring.
Language is, therefore, instrumental to human thought as distinct from animal thought because it allows us to more easily acquire and develop new patterns of thinking.
In my personal experience, my mind became much less busy as a result of several steps. One being abandoning the theory of mind -- in contrast to spiritual practices such as Zen and forms of Hinduism, where controlling the mind, preventing its misbehavior, or getting rid of it somehow is frequently described as a goal, the mind's activity being to blame for a loss of a person's ability to be present in the here and now.
As a teenager, I can remember trying to plan in advance what I would say to a person when faced with a situation of conflict, or maybe of desire toward the opposite sex, doubting that language would reliably sprout from my feelings when facing a person whose facial reactions (and my dependence on their good will) pull me out of my mental, emotional, kinesthetic grounding.
As humans we use language, however, it seems possible to live in our experience. Some people who are alienated from their experience, or overwhelmed by others, seek refuge in language.
There is obviously a gap between research such as this, and how someone can make sense of their agency in life, finding their way forward when confronted with conflict, uncertainty, etc.
First) This is correct in trivial ways and incorrect in profound ways.
Trivially Correct: Clearly language is, at best, a lossy channel for thought. It isn't thought compressed; it is thought where the map would be too complex for language, so we draw a kindergarten scribble we all agree on, and that covers a lot of ground as an imperfect pointer. This description is itself imperfect, but as a rough sketch it's not too controversial.
Profoundly Incorrect: As pointers, words facilitate thought in complex ways that would be incredibly difficult otherwise. They are abstractions you can build on like building blocks and, so long as you're careful about understanding where the word ends and doesn't encompass the full thing, you reduce the risk of reifying the word overmuch. Language is not thought, but it isn't thought in some -- not all -- of the ways in which a building's walls are not its interior spaces. Of course it isn't. The space would be there either way, but keeping it all arranged so nicely, and making it easy to reference different elements of it, is more than just convenience, and it is inextricable from language, or at least from some representational system for doing this sort of thing.
Second) It is so strange to see this sort of thing written about, in this way, as if it were a new conception, a new view of language. But then I look at the researchers involved: nearly always backgrounds outside the formal study of linguistics and language itself, focused instead on adjacent or related areas. Even computational linguistics -- perhaps especially computational linguistics. The educational pathway there much more commonly runs from computation to applications in language, rather than vice versa. This is much less the case with bioinformatics and computational biology, where traditional biology is much more often within a student's foundation. (This is not anecdotal; analysis of student pathways through academic studies is a past area of my own professional career.)
Through the lens of the history of academia over the past few decades, this is not all that surprising. Chomsky's fault (my opinion) for trying to wall off the discipline from other areas of study or perspective other than his own.
Perhaps there are also multiple human paths to higher-level thought, with Keller (who lost her sight and hearing) using the language facility while others don't have to.
* Given Box 1 contents, the article authors seem unaware of the research on this? e.g.
https://www.youcubed.org/resource/visual-mathematics/
https://www.hilarispublisher.com/open-access/seeing-as-under...
Up until that point language was just an extension of what she already knew; it was the learning of being other that did the trick. Being blind and deaf would certainly make it hard to draw a distinction between the self and the world, and while language helped her get that concept under wraps, I don't think it's strictly speaking required. Just one of many avenues toward it.
If there are other avenues other than language, how would we know?
I think language is a medium that enables this kind of structured thought. Without it, I cannot imagine reaching this level of abstraction (understanding being a "self").
What ontological difference does it make whether a being contains "introspection" and "self-reflexivity" but not "nuclear physics" or "interpretive dance"? It's still hungry with or without them. And what good is any of those to a cat, when "meow" fills the bowl just fine?
>If there are other avenues other than language, how would we know?
Well, if you knew, you'd certainly know, tautology extremely intended.
You would just be unable to communicate it, because language would forbid it.
Not "not support it", you see, explicitly forbid it: it would not only be impossible for you to communicate it, you would be exposing yourself to danger by attempting to communicate it.
Because the arbitrary limitation of expressible complexity is what holds language in power. (Hint: if people keep responding to you in confusing ways, you may be doing extralinguistic cognition; keep it up!)
>I think language is a medium that enables this kind of structured thought. Without it, I cannot imagine reaching this level of abstraction (understanding being a "self").
Language does a bait and switch here: first it sets a normative upper bound on the efficiency of knowledge transfer, then points at the limitation and names it "knowledge".
That's stupid.
Example: "the Self", oh that pesky Self, what is its true nature o wise ones? It's just another fucking linguistic artifact, that's what it is; "self-referentiality" is like the least abstract thing there is. You just got a bunch of extra unrelated stuff tacked onto that. And of course, you have an obligation to mistake that stuff for some mysterious ineffable nature and/or for yourself: if you did not learn to perform these miscognitions, the apes would very quickly begin to deny you sustenance, shelter, and/or bodily integrity.
Sincerely, your cat
I am glad humans are meaningfully smarter than chimps, and not merely more vocal. Helen Keller herself seemed to think that learning language finally helped her understand what this weird language thing was:
I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten—a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that w-a-t-e-r meant the wonderful cool something that was flowing over my hand. The living word awakened my soul, gave it light, hope, set it free!
It is not like she was constantly dehydrated because she didn't understand what water was. She realized even a somewhat open-ended concept like "water" could be given a name by virtue of being recognizable via stimulus and bodily perception. That in and of itself is quite a high-level thought!

Will it be a full copy of the original algorithm - the same exact implementation? Often not.
Will it be close enough to be useful? Maybe.
LLMs use human language data as inputs and outputs, and they learn (mostly) from human language. But they have non-language internals. It's those internal algorithms, trained by relations seen in language data, that give LLMs their power.
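A minimal sketch of that boundary (Python with NumPy; the weights are random here, purely illustrative): words appear only at the input and output edges, while the internals are transformations over vectors.

```python
import numpy as np

vocab = {"water": 0, "cool": 1, "flows": 2}
emb = np.random.randn(len(vocab), 8)  # a learned lookup table in a real model

def forward(tokens):
    x = emb[[vocab[t] for t in tokens]]          # words -> vectors (language ends here)
    hidden = np.tanh(x @ np.random.randn(8, 8))  # internals: no words in sight
    return hidden @ emb.T                        # vectors -> per-word scores

print(forward(["water", "flows"]).shape)  # (2, 3): a score per vocab word, per position
```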
I personally believe that our thinking is fundamentally grounded/embodied in abstract/generalized representations of our actions and experiences. These representations are diagrammatic in nature, because only diagrams allow us to act on general objects in (almost) the same way as we act on real-world objects. By "diagrams" I mean not necessarily visual or static artefacts; they can be much more elusive, kinaesthetic and dynamic. Sometimes I am conscious of them when I think; sometimes they are more "hidden" underneath a symbolic/language layer.