> The first is that reasoning probably requires language use. Even if you don’t think AI models can “really” reason - more on that later - even simulated reasoning has to be reasoning in human language.
That is an unreasonable assumption. In case of LLMs it seems wasteful to transform a point from latent space into a random token and lose information. In fact, I think in near future it will be the norm for MLLMs to "think" and "reason" without outputting a single "word".
> Whether AI reasoning is “real” reasoning or just a mirage can be an interesting question, but it is primarily a philosophical question. It depends on having a clear definition of what “real” reasoning is, exactly.
It is not a "philosophical" (by which the author probably meant "practically inconsequential") question. If the whole reasoning business is just rationalization of pre-computed answers, or simply a means to do some computation because every token provides only a fixed amount of computation to update the model's state, then it doesn't make much sense to focus on improving the quality of chain-of-thought output from a human point of view.
I'm pretty much a layperson in this field, but I don't understand why we're trying to teach a stochastic text transformer to reason. Why would anyone expect that approach to work?
I would have thought the more obvious approach would be to couple it to some kind of symbolic logic engine. It might transform plain language statements into fragments conforming to a syntax which that engine could then parse deterministically. This is the Platonic ideal of reasoning that the author of the post pooh-poohs, I guess, but it seems to me to be the whole point of reasoning; reasoning is the application of logic in evaluating a proposition. The LLM might be trained to generate elements of the proposition, but it's too random to apply logic.
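For readers who want to see what that coupling might look like in the smallest possible form, here is a sketch in Python. The llm_translate step is a hypothetical placeholder for whatever model call would turn prose into clauses (hard-coded here so the snippet runs on its own); the deterministic part is a tiny forward-chaining engine over propositional facts and rules.

    # Sketch only: couple a language model to a deterministic inference step.
    # llm_translate() is a hypothetical placeholder for an LLM call that turns
    # plain-language statements into facts and (premises, conclusion) rules.

    def llm_translate(text):
        # Hypothetical: in practice this would prompt a model to emit structured
        # clauses. Hard-coded here so the example runs on its own.
        return (
            {"socrates_is_a_man"},                            # facts
            [({"socrates_is_a_man"}, "socrates_is_mortal")],  # rules
        )

    def forward_chain(facts, rules):
        # Deterministic fixpoint iteration: keep firing rules whose premises
        # are all known until nothing new can be derived.
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    facts, rules = llm_translate("All men are mortal. Socrates is a man.")
    print(forward_chain(facts, rules))
    # {'socrates_is_a_man', 'socrates_is_mortal'}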
Problem is, even with symbolic logic, reasoning is not completely deterministic. Whether one can get to a set of given axioms from a given proposition is sometimes undecidable.
Solutions to some of the hardest problems I've had have only come after a night of sleep or when I'm out on a walk and I'm not even thinking about the problem. Maybe what my brain was doing was something different from reasoning?
This is a very important point and mostly absent from the conversation.
We have many words that almost mean the same thing or can mean many different things - and conversations about intelligence and consciousness are riddled with them.
Not all reasoning requires language. Symbolic reasoning uses language.
Real-time spatial reasoning like driving a car and not hitting things does not seem linguistic.
Figuring out how to rotate a cabinet so that it will clear through a stairwell also doesn't seem like it requires language, only to communicate the solution to someone else (where language can turn into a hindrance, compared to a diagram or model).
Pivot!
When using AI they say "Context is King". "Reasoning" models are using the AI to generate context. They are not reasoning in the sense of logic, or philosophy. Mirage, or whatever you want to call it, it is rather unlike what people mean when they use the term reasoning. Calling it reasoning is up there with calling output people don't like "hallucinations".
You are making the same mistake OP is calling out. As far as I can tell “generating context” is exactly what human reasoning is too. Consider the phrase “let’s reason this out” where you then explore all options in detail, before pronouncing your judgement. Feels exactly like what the AI reasoner is doing.
"let's reason this out" is about gathering all the facts you need, not just noting down random words that are related. The map is not the terrain, words are not facts.
Have you read the chain of thought output from reasoning models? That's not what it does.
Reasoning is also about processing facts.
Performance is proportional to the number of reasoning tokens. How to reconcile that with your opinion that they are "random words"?
s/random/statistically-likely/g
Reducing the distance of each statistical leap improves “performance” since you would avoid failure modes that are specific to the largest statistical leaps, but it doesn’t change the underlying mechanism. Reasoning models still “hallucinate” spectacularly even with “shorter” gaps.
What's wrong with statistically likely?
If I ask you what's 2+2, there's a single answer I consider much more likely than others.
Sometimes, words are likely because they are grounded in ideas and facts they represent.
> Sometimes, words are likely because they are grounded in ideas and facts they represent.
Yes, and other times they are not. I think the failure modes of a statistical model of a communicative model of thought are unintuitive enough without any added layers of anthropomorphization, so there remains some value in pointing it out.
Technically, "random" outcomes can have probabilities associated with them. In casual speech, "random" means equal probabilities, or that we don't know the probabilities. But for LLM token output, the model does estimate the probabilities.
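As a concrete illustration of that distinction (nothing here is tied to any particular model): sampling from estimated softmax probabilities behaves very differently from sampling uniformly at random.

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["4", "five", "banana", "22"]
    logits = np.array([6.0, 2.0, -3.0, 0.5])   # model's scores for "2+2="

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                        # softmax: estimated probabilities

    uniform_pick = rng.choice(vocab)            # "random" in the casual sense
    weighted_pick = rng.choice(vocab, p=probs)  # what an LLM sampler actually does

    print(dict(zip(vocab, probs.round(3))))     # "4" dominates, at roughly 0.98
    print(uniform_pick, weighted_pick)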
No, people make logical connections, make inferences, make sure all of it fits together without logical errors, etc.
These people you're talking about must be rare online, as human communication is pretty rife with logical errors.
Since that November in which this technology boomed, we have too often been reading "people also drink from puddles", as if it were standard practice.
That we implement skills, not deficiencies, is a basic concept that has reached such a level of needed visibility that it should probably be written into the guidelines.
We implement skills, not deficiencies.
You shouldn’t be basing your entire worldview around the lowest common denominator. All kinds of writers like blog writers, novelists, scriptwriters, technical writers, academics, poets, lawyers, philosophers, mathematicians, and even teenage fan fiction writers do what I said above routinely.
>As far as I can tell “generating context” is exactly what human reasoning is too.
This was the view of Hume (humans as bundles of experience who just collect information and make educated guesses for everything). Unfortunately, it leads to philosophical skepticism, in which you can't ground any knowledge absolutely, as it's all just justified by some knowledge you got from someone else, which also came from someone else, etc., and eventually you can't actually justify any knowledge that isn't directly a result of experience (the concept of "every effect has a cause" is a classic example).
There have been plenty of epistemological responses to this viewpoint, with Kant's view, of humans doing a mix of "gathering context" (using our senses) but also applying universal categorical reasoning to schematize and understand / reason from the objects we sense, being the most well known.
I feel like anyone talking about the epistemology of AI should spend some time reading the basics of all of the thought from the greatest thinkers on the subject in history...
Feels like, but isn't. When you are reasoning things out, there is a brain with state that is actively modeling the problem. AI does no such thing, it produces text and then uses that text to condition the next text. If it isn't written, it does not exist.
Put another way, LLMs are good at talking like they are thinking. That can get you pretty far, but it is not reasoning.
The transformer architecture absolutely keeps state information "in its head" so to speak as it produces the next word prediction, and uses that information in its compute.
It's true that if it's not producing text, there is no thinking involved, but it is absolutely NOT clear that the attention block isn't holding state and modeling something as it works to produce text predictions. In fact, I can't think of a way to define it that would make that untrue... unless you mean that there isn't a system wherein something like attention is updating/computing and the model itself chooses when to make text predictions. That's by design, but what you're arguing doesn't really follow.
Now, whether what the model is thinking about inside that attention block matches up exactly or completely with the text it's producing as generated context is probably at least a little dubious, and it's unlikely to be a complete representation regardless.
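For the curious, the "state kept around while producing the next word" can be made concrete: during incremental decoding, keys and values from earlier positions are cached and reused at every step. A minimal single-head sketch, not any particular model's implementation:

    import numpy as np

    d = 8
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
    k_cache, v_cache = [], []    # persists across decoding steps

    def decode_step(x):
        # x: embedding of the newest token; attend over everything cached so far.
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        k_cache.append(k)
        v_cache.append(v)
        K, V = np.stack(k_cache), np.stack(v_cache)
        scores = K @ q / np.sqrt(d)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V       # context vector that feeds the next prediction

    for token_embedding in rng.standard_normal((5, d)):
        out = decode_step(token_embedding)
    print(len(k_cache))          # 5 cached key/value pairs: the carried state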
> The transformer architecture absolutely keeps state information "in its head" so to speak as it produces the next word prediction, and uses that information in its compute.
How so? Transformers are state space models.
So exactly what language/paradigm is this brain modeling the problem within?
We literally don't know. We don't understand how the brain stores concepts. It's not necessarily language: there are people that do not have an internal monologue, and yet they are still capable of higher level thinking.
Rilke: "There is a depth of thought untouched by words, and deeper still a depth of formless feeling untouched by thought."
Perhaps we can find some objective means to decide, rather than go with what "feels" correct
But a big point here becomes whether the generated "context" then receives proper processing.
People will go to extremely great lengths to debate the appropriate analogy for how these things work, which is fun I guess but in a "get high with a buddy" sense at least to my taste.
Some of how they work is well understood (a lot now, actually), some of the outcomes are still surprising.
But we debate both the well understood parts and the surprising parts with the wrong terminology, borrowed from pretty dubious corners of pop cognitive science, and not with terminology appropriate to the new and different thing! It's nothing like a brain, it's a new, different thing. Does it think or reason? Who knows, pass the blunt.
They get X performance on Y task according to Z eval; that's how you discuss ML model capability if you're pursuing understanding rather than fundraising or clicks.
It’s incredible to me that so many seem to have fallen for “humans are just LLMs bruh” argument but I think I’m beginning to understand the root of the issue.
People who only “deeply” study technology only have that frame of reference to view the world so they make the mistake of assuming everything must work that way, including humans.
If they had a wider frame of reference that included, for example, Early Childhood Development, they might have enough knowledge to think outside of this box and know just how ridiculous that argument is.
I have a background in ML and work in software development, but studied experimental psych in a past life. It's actually kind of painful watching people slap phrases related to cognition onto things that aren't even functionally equivalent to their namesakes, then parade them around like some kind of revelation. It's also a little surprising that there's no interest (at least publicly) in using cognitive architectures in the development of AI systems.
That is an issue prevalent in the Western world for the last 200 years, beginning possibly with the Industrial Revolution, probably earlier. That problem is reductionism, rigorously applied down to the lowest level: discover the smallest element of every field of science, develop an understanding of all the parts from the smallest part upwards, and from the understanding of the parts develop an understanding of the whole.
Unfortunately, this approach does not yield understanding, it yields know-how.
They should call them Fuzzing models. They're just running through various iterations of the context until they hit a token that trips them out.
And yet it improves their problem solving ability.
I'm unconvinced by the article's criticisms, given that they also rely on feelings and few citations.
> I appreciate that research has to be done on small models, but we know that reasoning is an emergent capability! (...) Even if you grant that what they’re measuring is reasoning, I am profoundly unconvinced that their results will generalize to a 1B, 10B or 100B model.
A fundamental part of applied research is simplifying a real-world phenomenon to better understand it. Dismissing the finding that, at this many parameters and for such a simple problem, the LLM can't perform out of distribution, merely because the model isn't big enough, undermines the very value of independent research. Tomorrow another model with double the parameters may or may not show the same behavior, but that finding will be built on top of this one.
Also, how do _you_ know that reasoning is emergent, and not rationalising on top of a compressed version of the web stored in 100B parameters?
I think that when you are arguing logic and reason with a group who became really attached to the term vibe-coding you've likely already lost.
"The question [whether computers can think] is just as relevant and just as meaningful as the question whether submarines can swim." -- Edsger W. Dijkstra, 24 November 1983
I don't agree with the parallel. Submarines can move through water - whether you call that swimming or not isn't an interesting question, and doesn't illuminate the function of a submarine.
With thinking or reasoning, there's not really a precise definition of what it is, but we nevertheless know that currently LLMs and machines more generally can't reproduce many of the human behaviours that we refer to as thinking.
The question of what tasks machines can currently accomplish is certainly meaningful, if not urgent, and the reason LLMs are getting so much attention now is that they're accomplishing tasks that machines previously couldn't do.
To some extent there might always remain a question about whether we call what the machine is doing "thinking" - but that's the uninteresting verbal question. To get at the meaningful questions we might need a more precise or higher resolution map of what we mean by thinking, but the crucial element is what functions a machine can perform, what tasks it can accomplish, and whether we call that "thinking" or not doesn't seem important.
Maybe that was even Dijkstra's point, but it's hard to tell without context...
It is strange that you started your comment with "I don't agree". The rest of the comment demonstrates that you do agree.
https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD898... provides the context. I haven't re-read it in the last month, but I'm pretty sure you've correctly identified Dijkstra's point.
But the topic here is whether some techniques are progressive or not
(with a curious parallel about whether some paths in thought are dead-ends - the unproductive focus mentioned in the article).
Finally! A good take on that paper. I saw that Ars Technica article posted everywhere, and most of the comments are full of confirmation bias, and almost all of them miss the fine print - it was tested on a 4-layer-deep toy model. It's nice to read a post that actually digs deeper and offers perspective on what might be a good finding vs. what just warrants more research.
> it was tested on a 4 layer deep toy model
How do you see that impacting the results? It is the same algorithm just on a smaller scale. I would assume a 4 layer model would not be very good, but does reasoning improve it? Is there a reason scale would impact the use of reasoning?
A lot of current LLM work is basically emergent behavior. They use a really simple core algorithm and scale it up, and interesting things happen. You can read some of Anthropic's recent papers to see some of this. For example, they didn't expect LLMs could "look ahead" when writing poetry. However, when they actually went in and watched what was happening (there are details on how this "watching" works on their blog and in their studies), they found the LLM actually was planning ahead! That's emergent behavior: they didn't design it to do that, it just started doing it due to the complexity of the model.
If (BIG if) we ever do see actual AGI, it is likely to work like this. It's unlikely we're going to make AGI by designing some grand Cathedral of perfect software, it is more likely we are going to find the right simple principles to scale big enough to have AGI emerge. This is similar.
There's prior research that finds a connection between model depth and "reasoning" ability - https://arxiv.org/abs/2503.03961
A depth of 4 is very small. It is very much a toy model. It's ok to research this, and maybe someone will try it out on larger models, but it's totally not ok to lead with the conclusion, based on this toy model, IMO.
Human babies are the same algorithm as adults.
Chain of thought is just a way of trying to squeeze more juice out of the lemon of LLMs; I suspect we're at the stage of running up against diminishing returns and we'll have to move to different foundational models to see any serious improvement.
Current thought: for me there's a lot of hand-wringing about what is "reasoning" and what isn't. But right now perhaps the question might be boiled down to -- "is the bottleneck merely hard drive space/memory/computing speed?"
I kind of feel like we won't be able to even begin to test this until a few more "Moore's law" cycles.
Mathematical reasoning does sometimes require correct calculations, and if you get them wrong your answers will be wrong. I wouldn’t want someone doing my taxes to be bad at calculation or bad at finding mistakes in calculation.
It would be interesting to see if this study’s results can be reproduced in a more realistic setting.
I feel like the fundamental concept of symbolic logic[1] as a means of reasoning fits within the capabilities of LLMs.
Whether it's a mirage or not, the ability to produce a symbolically logical result that has valuable meaning seems real enough to me.
Especially since most meaning is assigned by humans onto the world... so too can we choose to assign meaning (or not) to the output of a chain of symbolic logic processing?
Edit: maybe it is not so much that an LLM calculates/evaluates the result of symbolic logic as it is that it "follows" the pattern of logic encoded into the model.
[1] https://en.wikipedia.org/wiki/Logic
I feel it is interesting but not what would be ideal. I really think that if the models could be less linear and process over time in latent space, you'd get something much more akin to thought. I've messed around with attaching reservoirs at each layer using hooks, with interesting results (mainly overfitting), but it feels like such a limitation to have all model context/memory stuck as tokens when latent space is where the richer interaction lives. I would love to see more done where thought over time mattered and the model could almost mull over the question a bit before being obligated to crank out tokens. Not an easy problem, but interesting.
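For readers unfamiliar with the hook mechanism mentioned above: PyTorch lets you attach a forward hook to any module to read (or replace) its hidden activations, which is roughly the entry point one would use to bolt extra latent-space processing onto existing layers. A toy sketch, not the parent's actual setup:

    import torch
    import torch.nn as nn

    # Stand-in for a transformer block stack; the hook pattern is the point here.
    model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
    captured = []

    def capture_hidden(module, inputs, output):
        # Called on every forward pass; store (or modify and return) the latent.
        captured.append(output.detach())

    handle = model[0].register_forward_hook(capture_hidden)

    _ = model(torch.randn(2, 16))
    print(captured[0].shape)   # torch.Size([2, 16]) -- the intercepted latent
    handle.remove()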
They're already implementing branching thought and taking the best one, eventually the entire response will be branched, with branches being spawned and culled by some metric over the lifetime of the completion. It's just not feasible now for performance reasons.
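A hedged sketch of that branch-and-cull idea; generate_continuations and score are hypothetical stand-ins for a model call and whatever metric would rank branches, not a real API:

    # Hypothetical branch-and-cull loop: spawn several continuations, keep the
    # best few by some score, repeat.

    def generate_continuations(text, n):
        return [text + f" [branch {i}]" for i in range(n)]   # stand-in for a model call

    def score(text):
        return -len(text)                                    # stand-in ranking metric

    def branched_completion(prompt, steps=3, width=4, keep=2):
        frontier = [prompt]
        for _ in range(steps):
            candidates = [c for t in frontier
                            for c in generate_continuations(t, width)]
            frontier = sorted(candidates, key=score, reverse=True)[:keep]
        return frontier[0]

    print(branched_completion("Let's reason this out."))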
Agree! I’m not an AI engineer or researcher, but it always struck me as odd that we would serialise the 100B or whatever parameters of latent space down to a maximum of 1M tokens and back for every step.
>I feel it is interesting but not what would be ideal. I really think if the models could be less linear and process over time in latent space you'd get something much more akin to thought.
Please stop, this is how you get AI takeovers.
Citation seriously needed.
Mostly. It gives language models a way to dynamically allocate computation time, but the models are still fundamentally imitative.
> reasoning probably requires language use
The author has a curious idea of what "reasoning" entails.
> Whether AI reasoning is “real” reasoning or just a mirage can be an interesting question, but it is primarily a philosophical question. It depends on having a clear definition of what “real” reasoning is, exactly.
It's pretty easy: causal reasoning. Causal, not just the statistical correlation that LLMs do, with or without "CoT".
Correct me if I'm wrong, I'm not sure it's so simple. LLMs are called causal models in the sense that earlier tokens "cause" later tokens, that is, later tokens are causally dependent on what the earlier tokens are.
If you mean deterministic rather than probabilistic, even Pearl-style causal models are probabilistic.
I think the author is circling around the idea that their idea of reasoning is to produce statements in a formal system: to have a set of axioms, a set of production rules, and to generate new strings/sentences/theorems using those rules. This approach is how math is formalized. It allows us to extrapolate - make new "theorems" or constructions that weren't in the "training set".
By this definition a bag of answers is causal reasoning because we previously filled the bag, which caused what we pulled. State causing a result is not causal reasoning.
You need to actually have something that deduces a result from a set of principles that form a logical conclusion, or the understanding that more data is needed to make a conclusion. That is clearly different from finding a likely next token on statistics alone, despite the fact that the statistical answer can be correct.
But let's say you change your mathematical expression by reducing or expanding it somehow. Then, unless it's trivial, there are infinite ways to do it, and the "cause" here is the answer to the question "why did you do that and not something else?" Brute force excluded, the cause is probably some idea, some model of the problem, or a gut feeling (or desperation...).
Smoking increases the risk of getting cancer significantly. We say Smoking causes Cancer. Causal reasoning can be probabilistic.
LLMs are not doing causal reasoning because there are no facts, only tokens. For the most part you can't ask LLMs how they came to an answer, because they don't know.
What's stopping us from building an LLM that can build causal trees, rejecting some trees and accepting others based on whatever evidence it is fed?
Or even a causal tool for an LLM agent that operates like what it does when you ask it about math and forwards the request to Wolfram.
>What's stopping us from building an LLM that can build causal trees, rejecting some trees and accepting others based on whatever evidence it is fed?
Exponential time complexity.
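To put a rough number on that: even before evaluating any evidence, the space of candidate causal structures explodes. The number of possible DAGs over n variables grows super-exponentially, which the following snippet computes via Robinson's recurrence:

    from math import comb

    def count_dags(n):
        # Robinson's recurrence for the number of labeled DAGs on n nodes.
        a = [1]
        for m in range(1, n + 1):
            a.append(sum((-1) ** (k + 1) * comb(m, k) * 2 ** (k * (m - k)) * a[m - k]
                         for k in range(1, m + 1)))
        return a[n]

    for n in range(1, 8):
        print(n, count_dags(n))
    # 1, 3, 25, 543, 29281, 3781503, 1138779265, ...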
> causal reasoning
You have missed the foundation: before dynamics, being. Before causal reasoning you have deep definition of concepts. Causality is "below" that.
Define causal reasoning?
This paper I read from here has an interesting mathematical model for reasoning based on cognitive science. https://arxiv.org/abs/2506.21734 (there is also code here https://github.com/sapientinc/HRM) I think we will see dramatic performance increases on "reasoning" problems when this is worked into existing AI architectures.
We should be asking if reasoning while speaking is even possible for humans. This is why we have the scientific method, and that's why LLMs write and run unit tests on their reasoning. But yeah, intelligence is probably in the ear of the believer.
Yes, it's a mirage, since this type of software is an opaque simulation, perhaps even a simulacrum. It's reasoning in the same sense as there are terrorists in a game of Counter-Strike.
One thing that LLMs have exposed is how much of a house of cards all of our definitions of "human mind"-adjacent concepts are. We have a single example in all of reality of a being that thinks like we do, and so all of our definitions of thinking are inextricably tied with "how humans think", and now we have an entity that does things which seem to be very like how we think, but not _exactly like it_, and a lot of our definitions don't seem to work any more:
Reasoning, thinking, knowing, feeling, understanding, etc.
Or at the very least, our rubrics and heuristics for determining if someone (thing) thinks, feels, knows, etc, no longer work. And in particular, people create tests for those things thinking that they understand what they are testing for, when _most human beings_ would also fail those tests.
I think a _lot_ of really foundational work needs to be done on clearly defining a lot of these terms and putting them on a sounder basis before we can really move forward on saying whether machines can do those things.
I agree 100% with you. I'm most excited about LLMs because they seem to capture at least some aspect of intelligence, and that's amazing given how long it took to get here. It's exciting that we don't yet fully understand it.
I see people say, "LLMs aren't human intelligence", but instead, I really feel that it shows that many people, and much of what we do, probably is like an LLM. Most people just hallucinate their way through a conversation, they certainly don't reason. Reasoning is incredibly rare.
Congratulations, you've invented philosophy.
We need to reinvent philosophy. With JSON this time.
This is an obnoxious response. Of course I recognize that philosophy is the solution to this. What I am pointing out is that philosophy has not as of yet resolved these relatively new problems. The idea that non-human intelligences might exist is of course an old one, but that is different from having an actual (potentially) existing one to reckon with.
> Writings on metacognition date back at least as far as two works by the Greek philosopher Aristotle (384–322 BC): On the Soul and the Parva Naturalia
We built a box that spits out natural language and tricks humans into believing it's conscious. The box itself actually isn't that interesting, but the human side of the equation is.
> the human side of the equation is
You have only proven the urgency of Intelligence, the need to produce it in inflationary amounts.
These are not new problems though.
Non-human intelligences have always existed in the form of animals.
Animals do not have spoken language the way humans do, so their thoughts aren’t really composed of sentences. Yet, they have intelligence and can reason about their world.
How could we build an AGI that doesn’t use language to think at all? We have no fucking clue and won’t for a while because everyone is chasing the mirage created by LLMs. AI winter will come and we’ll sit around waiting for the next big innovation. Probably some universal GOAP with deeply recurrent neural nets.
> which seem to be very like how we think
I would like to reassure you that we - we here - see LLMs as very much unlike us.
Yes I very much understand that most people do not think that LLMs think or understand like we do, but it is _very difficult_ to prove that that is the case, using any test which does not also exclude a great deal of people. And that is because "thinking like we do" is not at all a well-defined concept.
> exclude a great deal of people
And why should you not exclude them? Where does this idea come from, taking random elements as models? Where do you see pedestals of free access? Is the Nobel Prize a raffle now?
I mostly agree with the point the author makes that "it doesn't matter". But then again, it does matter, because LLM-based products are marketed based on "IT CAN REASON!" And so, while it may not matter, per se, how an LLM comes up with its results, to the extent that people choose to rely on LLMs because of marketing pitches, it's worth pushing back on those claims if they are overblown, using the same frame that the marketers use.
That said, this author says this question of whether models "can reason" is the least interesting thing to ask. But I think the least interesting thing you can do is to go around taking every complaint about LLM performance and saying "but humans do the exact same thing!" Which is often not true, but again, doesn't matter.
> Because reasoning tasks require choosing between several different options. “A B C D [M1] -> B C D E” isn’t reasoning, it’s computation, because it has no mechanism for thinking “oh, I went down the wrong track, let me try something else”. That’s why the most important token in AI reasoning models is “Wait”. In fact, you can control how long a reasoning model thinks by arbitrarily appending “Wait” to the chain-of-thought. Actual reasoning models change direction all the time, but this paper’s toy example is structurally incapable of it.
I think this is the most important critique that undercuts the paper's claims. I'm less convinced by the other point. I think backtracking and/or parallel search is something future papers should definitely look at in smaller models.
The article is definitely also correct on the overreaching, broad philosophical claims that seem common when discussing AI and reasoning.
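The "Wait" mechanism the quoted passage describes is essentially budget forcing: when the model tries to end its chain-of-thought, the stop is overridden and "Wait" is appended so it keeps going. A rough sketch, with generate_until as a hypothetical stand-in for a real inference call:

    # Sketch of budget forcing on a chain-of-thought, per the mechanism quoted
    # above. generate_until() is a hypothetical stand-in for an inference call
    # that streams tokens until it emits the end-of-thinking marker.

    END_OF_THINKING = "</think>"

    def extended_reasoning(prompt, generate_until, min_extensions=2):
        trace = prompt
        for _ in range(min_extensions):
            trace += generate_until(trace, stop=END_OF_THINKING)
            trace += " Wait,"        # suppress the stop and force more thinking
        trace += generate_until(trace, stop=END_OF_THINKING)
        return trace + END_OF_THINKING

    def fake_generate_until(trace, stop):
        # Dummy stand-in so the sketch runs; a real call would hit a model.
        return " ...some reasoning tokens..."

    print(extended_reasoning("Q: 17 * 24 = ? <think>", fake_generate_until))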
Betteridge's Law of Headlines.
https://en.m.wikipedia.org/wiki/Betteridge's_law_of_headline...
Betteridge's law applies to editors adding question marks to cover the ass of articles with weak claims, not to bloggers begging questions.