kelseyfrog 2 days ago

The cybernetic psychology idea of comparing sensed vs. desired states maps cleanly onto a PID controller: P reacts to the present error, I accumulates past errors, and D anticipates future ones.

It's not just a comparator: it's how far off I am right now (P), how long I've been off (I), and how fast I'm drifting (D).

In this framework, emotional regulation looks like control theory. Anxiety isn't just a feeling; it's high D-gain, i.e. a system overreacting to projected errors.

Depression? Low P (blunted response), high I (burden of unresolved past errors), and broken D (no expected future improvement).

Mania? Cranked P and D, and I disabled.

In addition to personality traits being setpoints, our perceptions of the past, present, and future might just be PID parameters. What we call "disorders" are oscillations, dead zones, or gain mismatches. But as the article pointed out, it's not really a scientific theory unless it's falsifiable.
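
To make the mapping concrete, here's a minimal sketch of the loop I have in mind (plain Python; the gains are illustrative, not fitted to anything):

  # One step of a PID loop: error = desired state - sensed state.
  def pid_step(setpoint, sensed, integral, prev_error,
               kp=1.0, ki=0.1, kd=0.5, dt=1.0):
      error = setpoint - sensed               # P: how far off right now
      integral += error * dt                  # I: how long I've been off
      derivative = (error - prev_error) / dt  # D: how fast I'm drifting
      output = kp * error + ki * integral + kd * derivative
      return output, integral, error

In these terms, "anxiety as high D-gain" is just kd cranked up until small drifts trigger large corrections.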

  • rini17 a day ago

    But as recent AI advances hint, these states are highly many-dimensional, and in such spaces our intuitions fall apart. Even simple gradient descent works quite differently when you have millions of almost-cardinal directions to pick from at any point, and PID regulation is even more complex.

    And even in 3D, plain PID just can't cope when the space is discrete or when signal delays are large relative to the response time of the system. We don't say "oh, it's got anxiety" lol, we replace it with an updated algorithm.

    • kelseyfrog a day ago

      That sounds like a great research question, "Given this model[1], what's the dimensionality of the space?"

      1. Or more accurately, a model in this family

  • hnuser123456 19 hours ago

    This is tempting, because we have a tight grip on control theory. However, there are thousands of things you could measure, and picking one at a time to improve could be catastrophic. Give a severely depressed person a big bump in motivation and nothing else, and it might be their end. We'll need to pick a collection of probably at least 100 measurements of various personality facets, try different therapies and medicines, and see which ones reduce the average error the most for different starting conditions. And do this consistently. Then, maybe we can say things like "given your scenario, treatment X will improve your overall condition the most but may leave these facets suboptimal, and treatment Y will provide a slightly lower overall improvement but does better for these facets you were most concerned about" with actual mathematical confidence.
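
    A toy version of that comparison (facet names, weights, and numbers entirely made up):

      import numpy as np

      # Hypothetical per-facet errors = |measured - setpoint|, before and after.
      baseline = np.array([0.9, 0.4, 0.7])   # e.g. mood, motivation, sleep
      after_x  = np.array([0.2, 0.2, 0.5])
      after_y  = np.array([0.6, 0.3, 0.1])
      weights  = np.array([0.2, 0.2, 0.6])   # facets this patient cares about most

      for name, after in [("X", after_x), ("Y", after_y)]:
          gain = baseline - after
          print(name, "overall:", gain.mean(), "weighted:", (gain * weights).sum())

    With numbers like these, X wins on the unweighted average while Y wins once you weight the facets the patient cares about.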

  • p_v_doom a day ago

    If we want to be pedantic, both go back to cybernetics and the idea of the cybernetic loop.

vanderZwan a day ago

In short, the brain is a complex dynamic feedback system and cybernetics is a really good way to break down complex dynamic feedback systems? Seems like an obviously sensible take.

One thing I'm missing here is the idea of context.

To give a very concrete example from personal experience: I'm colorblind (protanomaly, meaning one of my color cones is miscalibrated, which can actually be partially "corrected" for by manipulating the input spectrum a.k.a. wearing specifically shaded glasses). As a result I've learned a lot about color theory over the years.

One fun thing: if you look at the history of color perception theories, once we realized what the eye is made of, we spent a while coming up with ever more sophisticated models for each of our cones, which was a great improvement... and then someone realized "hey, the way I perceive this shade of red also depends on which color surrounds it", and if you want to model that accurately, things get really complicated (it's also usually not needed for accurately storing and later reproducing colors). But it explains things like the "what color is the dress?" question taking the internet by storm.

So if we look at the "control systems" of the brain, I bet those "target values" will be a combination of static baseline plus contextual correction from other control systems. And teasing that apart will be quite a task.

Animats a day ago

That's a book review. Read the actual book.[1]

Notes:

- Prologue:

(Behaviorism) ended up being a terrible way to do psychology, but it was admirable for being an attempt at describing the whole business in terms of a few simple entities and rules. It was precise enough to be wrong, rather than vague to the point of being unassailable, which has been the rule in most of psychology.

- Thermostat:

An intro to control theory, but one which ignores stability. Maxwell's original paper, "On Governors" (1868), is still worth reading. He didn't just unify electromagnetism; he also founded control theory. Has the usual problems with applying this to emotions, and the author realizes this.

OK, so living things have a lot of feedback control systems. This is not a new observation. The biological term is "homeostasis", a concept apparently first described in 1849 and named in 1926. (There are claims that this concept dates from Aristotle, who wrote about "habit", but Aristotle didn't really get feedback control. Too early.)

- Motivation:

Pick goals with highest need level, but have some hysteresis to avoid toggling between behaviors too fast.
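
A sketch of that rule (the margin is a made-up number):

  def pick_goal(needs, current, margin=0.2):
      # Switch goals only when another need clearly beats the current one.
      best = max(needs, key=needs.get)
      if current is not None and needs[best] < needs[current] + margin:
          return current  # hysteresis: keep doing what you're doing
      return best

  # "eat" rises past "sleep", but not by enough to trigger a switch:
  goal = pick_goal({"eat": 0.5, "sleep": 0.6}, None)   # -> "sleep"
  goal = pick_goal({"eat": 0.7, "sleep": 0.6}, goal)   # -> still "sleep"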

- Conflict and oscillation:

Author discovers oscillation and stability in feedback systems.

- What is going on?

Author tries to derive control theory.

- Interlude

Norbert Wiener and cybernetics, which was peak fascination with feedback in the 1950s.

- Artificial intelligence

"But humans and all other biological intelligences are cybernetic minimizers, not reward maximizers. We track multiple error signals and try to reduce them to zero. If all our errors are at zero — if you’re on the beach in Tahiti, a drink in your hand, air and water both the perfect temperature — we are mostly comfortable to lounge around on our chaise. As a result, it’s not actually clear if it’s possible to build a maximizing intelligence. The only intelligences that exist are minimizing. There has never been a truly intelligent reward maximizer (if there had, we would likely all be dead), so there is no proof of concept. The main reason to suspect AI is possible is that natural intelligence already exists — us."

Hm. That's worth some thought. An argument against it is that there are clearly people driven by the desire for "more", with no visible upper bound.

- Animal welfare

Finally, "consciousness". It speaks well of the author that it took this long to bring that up. It's brought up in the context of whether animals are conscious, and, if so, which animals.

- Dynamic methods

Failure modes of multiple feedback systems, plus some pop psychology.

- Other methods

Much like the previous chapter.

- Help wanted

"If the proposal is more or less right, then this is the start of a scientific revolution."

Not seeing the revolution here. Most of the ideas here have been seen before. Did I miss something?

Feedback is important, but the author doesn't seem to have done enough of it to have a good understanding.

If you want an intuitive grasp of feedback, play with some op amps set up as an analog computer and watch the output on a scope. Or find a simulator. If The Analog Thing came with a scope (which, at its price point, it should), that would be ideal. Watch control loops with feedback and delay stabilize, oscillate, or limit. There are browser-based tools which do this, but they assume basic electrical engineering knowledge.
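
Failing hardware, a few lines of simulation show the same thing (a toy loop, not a circuit model: the output chases a setpoint using a measurement that arrives late):

  def simulate(gain, delay, steps=60):
      out = [0.0] * (delay + 1)
      for _ in range(steps):
          error = 1.0 - out[-1 - delay]       # stale measurement of the output
          out.append(out[-1] + gain * error)  # integrate the correction
      return out

  print(simulate(0.5, 0)[-1])    # no delay: settles near the setpoint
  print(simulate(0.5, 3)[-5:])   # same gain, delay of 3: rings and diverges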

[1] https://slimemoldtimemold.com/2025/02/06/the-mind-in-the-whe...

  • pointlessone a day ago

    > An argument against it is that there are clearly people driven by the desire for "more", with no visible upper bound.

    At the moment it's unknown whether it's a normal function of the system or a result of miscalibration/malfunction (assuming the theory is correct). The existence of these people is only evidence of the system's capacity, not a description of its nominal function.

  • amatic a day ago

    > Not seeing the revolution here. Most of the ideas here have been seen before. Did I miss something?

    The reviewer is a psychologist, with some interesting opinions and criticisms of psychology. My impression is that applying control theory to study human behavior should be the revolutionary thing, for psychology.

    • Animats a day ago

      This is not new ground. See Cybernetics: Or Control and Communication in the Animal and the Machine (1948) by Norbert Wiener.[1] Wiener wrote a popular version, "The Human Use of Human Beings".[2] There's a whole history of cybernetics as a field; this Wikipedia article has a good summary.[3] The beginnings of neural network work came from cybernetics. As with much of philosophy, areas in which someone got results split off to become fields of their own.

      [1] https://en.wikipedia.org/wiki/Cybernetics:_Or_Control_and_Co...

      [2] https://en.wikipedia.org/wiki/The_Human_Use_of_Human_Beings

      [3] https://en.wikipedia.org/wiki/Cybernetics

      • amatic a day ago

        > This is not new ground. See Cybernetics

        Control theory and cybernetics were supposed to transform psychology in a much more dramatic and all-encompassing way, as argued by W.T. Powers, for example[1]. In modern psychology, the concept of negative feedback control is treated like a metaphor, a vague connection between machines and living things (with the possible exception of the field of motor control). If psychology took the concept seriously, most research methods in the field would need to change: less null-hypothesis testing, more experiments applying disturbances to selected variables to see whether a participant controls them or not. That is the meaning I'm getting from the call to revolution.

        [1] https://www.iapct.org/wp-content/uploads/2022/12/Powers1978....

        • Animats 20 hours ago

          Ah. The linked paper goes into that in more detail.

          This was a hot idea right after WWII because servomechanisms were finally working. In movies of early-WWII naval gunnery, you see people turning cranks to get two arrows on a dial to match. By late WWII, that's become automatic, and anti-aircraft guns are hitting the target more of the time. Early-war air gunner training.[1] Late-war air gunner training: the computer does the hard part.[2] Never before had that much mechanized feedback smarts been applied to tough real-world problems.

          This sort of thing generated early AI enthusiasm. Machines can think! AGI Real Soon Now! Hence the "cybernetics" movement. That lasted about a decade. They needed another nine orders of magnitude of compute power. Psychology picked up on this concept, but didn't do much with it.

          Looks like it's coming around again.

          [1] https://www.youtube.com/watch?v=DWYqu1Il9Ps

          [2] https://www.youtube.com/watch?v=mJExsIp4yO8

  • MarkusQ a day ago

    > An argument against it is that there are clearly people driven by the desire for "more", with no visible upper bound.

    The real problem is that the distinction is meaningless. These people could just be described as "minimizing the risk of running out of money" (or paperclips, or whatever). Any "maximizing system" is isomorphic with a "minimizing system" using the inverse of the metric.
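
    In symbols: the argmax of f(x) over x equals the argmin of -f(x), so the two framings prescribe identical behavior.

      f = lambda x: -(x - 3) ** 2   # any metric with a peak
      xs = range(10)
      assert max(xs, key=f) == min(xs, key=lambda x: -f(x))  # both pick x = 3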

dogleash 2 days ago

The control loop metaphor was an interesting idea until I started worrying, halfway through, that it would be operating on something with all the same categorical problems as the impressionistic style derided at the start.

The sensations we tie to urination or breathing have quick cycle times, making it easy to test the causal loop. Thus confounding factors, such as a UTI causing a bladder-full sensation or nitrogen asphyxiation without any feeling of suffocation, are things we understand well.

The "Make Sure You Spend Time with Other People System" is a good example for a blog post, but it's already a fair bit looser. And when you consider that they want to investigate things we don't understand nearly as well as we understand loneliness, it smells like sneaking back towards tautologically defined systems like "zest for life."

ebolyen 2 days ago

If anyone is interested in a more formal description of these control loops, with more testable mechanisms, check out the concept of reward-taxis. Here are two neat papers that I think are more closely related than might initially appear:

"Is Human Behavior Just Running and Tumbling?": https://osf.io/preprints/psyarxiv/wzvn9_v1 (This used to be a blog post, but its down, so here's a essentially identical preprint.) A scale-invariant control-loop such as chemotaxis may still be the root algorithm we use, just adjusted for a dopamine gradient mediated by the prefrontal cortex.

"Give-up-itis: Neuropathology of extremis": https://www.sciencedirect.com/science/article/abs/pii/S03069... What happens when that dopamine gradient shuts down?

AndrewKemendo a day ago

I'm very excited to see cybernetic thinking finding its way to a more mainstream audience as people come to understand, in the long run, the generality of computing systems. I have not read the book in question yet, so I will reserve judgment; however, based on this post, it would seem that the majority of its claims were already captured in much more specific detail in the 2018 Sutton and Barto reinforcement learning textbook.[1]

Specifically, chapters 14 and 15 in Part III address the congruence of psychology and neuroscience with TD(λ) algorithms and Bellman reward updating, which solve HMMs and Markov decision processes generally.
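
For reference, the core temporal-difference update those chapters connect to dopamine signaling fits in a few lines (a TD(0) sketch; TD(λ) adds an eligibility trace over recent states):

  # One TD(0) step: nudge V(s) toward the Bellman target r + gamma * V(s').
  def td_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
      delta = r + gamma * V[s_next] - V[s]   # the "reward prediction error"
      V[s] += alpha * delta
      return delta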

The newest TD(λ) paper from this year goes even further: https://arxiv.org/abs/2410.14606

Exciting times ahead

[1] https://archive.org/details/rlbook2018

agos 2 days ago

> Imagine how hopeless we would be if we approached medicine this way, lumping together both Black Lung and the common cold as “coughing diseases”, even though the treatment for one of them is “bed rest and fluids” and the treatment for the other one is “get a different job”.

Well, this is definitely happening for some parts of medicine, like IBS or many forms of chronic pain.

> If you feel like you’re drowning, your Oxygen Governor is like “I GIVE THIS A -1000!”. When you can breathe again, though, maybe you only get the -1000 to go away, and you don’t get any happiness on top of that. You feel much better than you did before, but you don’t feel good.

Anecdote but: you absolutely feel good. At least, I did.

  • vanderZwan a day ago

    I was going to say the same thing: medicine does this all the time. The crucial thing to keep in mind, though, is that it usually has a decent awareness of whether it has a working model of the underlying causes or is stuck at a "least-terrible reverse-engineered behavioral rule of thumb" stage of understanding. The latter can still be better than doing nothing, and what kind of doctor would choose not to help their patient just because they don't understand why the cure works most of the time?

    > Anecdote but: you absolutely feel good. At least, I did.

    I don't know you but glad you're still with us, first of all. Anyway, I think the author was trying to say that the feeling of relief doesn't typically make one seek out the experience of drowning again. Although benign masochism exists of course (which is the technical term for the way people can learn to enjoy negative emotions and experiences). Which probably would be an interesting thing to explore using this cybernetics approach.

kogir 2 days ago

  Another example for all you computer folks out there: ultimately, all software
  engineering is just moving electrons around. But imagine how hard your job would
  be if you could only talk about electrons moving around. No arrays, stacks,
  nodes, graphs, algorithms—just those lil negatively charged bois and their 
  comings and goings.
I think this too easily skips over the fact that the abstractions are based on knowledge of how things actually work, known with certainty. Nobody in CS is approaching the computer as an entirely black box and making up how they think or hope it works. When people don't know how the computer actually works, their code is wrong: they get bugs and vulnerabilities they don't understand and can't explain.
  • gchamonlive 2 days ago

    > When people don't know how the computer actually works, their code is wrong - they get bugs and vulnerabilities they don't understand and can't explain.

    While this is true, we're usually targeting a platform, either x86 or arm64, that is an incredibly complex piece of engineering. Unless you're in IoT or your application requires optimizing at the hardware level, we're so distant from the hardware when programming in Python, for instance, that the awareness required of the hardware isn't much more sophisticated than a basic Turing machine.

    • quadhome a day ago

      Physical latencies in distributed systems design. Calibration for input devices. Block storage failures and RAID in general. Monitor refresh rates. Almost everything about audio. Rowhammer.

  • manugo4 2 days ago

    No, red is an abstraction that is not based on knowledge of how colors work.

    • ysofunny 2 days ago

      it is an abstraction based on how our biological eyes work (which implies "knowledge" of physics)

      so it is indirectly based on knowledge of how color works; it's simply not physics as we understand it, but "physics" as the biology of the eye "understands" it.

      red is an abstraction whose connection to how colors work is itself another abstraction, one of much deeper complexity than 'red', which is about as direct as an abstraction can get nowadays

      • xboxnolifes 2 days ago

        There is absolutely no knowledge needed for someone to point to something that is red and say "this is red" and then for you to associate things that roughly resemble that color to be red.

        Understanding the underlying concepts is irrelevant.

        • Finbel a day ago

          Except I could think they mean the name of the thing, the size of the thing, or a million other things. Especially if I have no knowledge of the underlying concept of colors.

    • lo_zamoyski 2 days ago

      "How colors work" is dubious.

      In physics, color has been redefined as a surface reflectance property with an experiential artefact as a mental correlate. But this understanding is the result of the assumptions made by Cartesian dualism. That is, Cartesian dualism doesn't prove that color as we commonly understand it doesn't exist in the world, only in the mind. No, it defines it to be the case. Res extensa is defined as colorless; the res cogitans then functions like a rug under which we can sweep the inexplicable phenomenon of color as we commonly understand it. We have a res cogitans of the gaps!

      Of course, materialists deny the existence of spooky res cogitans, admitting the existence of only res extensa. This puts them in a rather embarrassing situation, more awkward than that of the Cartesian dualist, because now they cannot explain how the color they've defined as an artefact of consciousness can exist in a universe of pure res extensa. It's not supposed to be there! This is an example of the problem of qualia.

      So you are faced with either revising your view of matter to allow for it to possess properties like color as we commonly understand them, or insanity. The eliminativists have chosen the latter.

      • ajross 2 days ago

        There's no definition for "color" in physics. Physics does quantum electrodynamics. Chemistry then uses that to provide an abstracted mechanism for understanding molecular absorption spectra. Biology then points out that those "pigments" are present in eyes, and that they can drive nerve signals to brains.

        Only once you're at the eye level does anyone start talking about "color". And yes, they define it by going back to physics and deciding on some representative spectra for "primary" colors (c.f. CIE 1931).

        Point being: everything is an abstraction. Everything builds on everything else. There are no simple ideas at the top of the stack.

        • lo_zamoyski 2 days ago

          > There's no definition for "color" in physics.

          This is unnecessarily pedantic. Your explanation demonstrates that.

          > There are no simple ideas at the top of the stack.

          I don't know what a "simple idea" is here, or what an abstraction is in this context. The latter has a technical meaning in computer science which is related to formalism, but in the context of physical phenomena, I don't know. It smells of reductionism, which is incoherent [0].

          [0] https://firstthings.com/aristotle-call-your-office/

          • ffwd a day ago

            > To untutored common sense, the natural world is filled with irreducibly different kinds of objects and qualities: people; dogs and cats; trees and flowers; rocks, dirt, and water; colors, odors, sounds; heat and cold; meanings and purposes.

            It's too early to declare that there are irreducible things in the universe. All of those things mentioned are created in the brain and we don't know how the brain works, or consciousness. We can't declare victory on a topic we don't fully understand. It's also a dubious notion to say things are irreducible when it's quite clear all of those things come from a single place (the brain), of which we don't have a clear understanding.

            We know that things like the brain and the nervous system operate at a certain macro level in the universe, so all the brain observes are ensembles of macro states; it doesn't observe the universe at the micro level. It's then quite natural that all the knowledge and theories it develops are on this macroscopic/ensemble level, imo. The mystery of this is still unsolved.

            Also, regarding the physics itself: we know that, due to the laws of physics, the universe tends to cluster physical matter together into bigger objects, like planets, birds, whatever. But those objects can be described as repeating patterns in the physical matter, and this repeating nature causes them to behave as if they have a purpose. The purpose is in the repetition. This is totally in line with reductionism.

            • lo_zamoyski 3 hours ago

              > It's too early to declare that there are irreducible things in the universe. [...] We can't declare victory on a topic we don't fully understand.

              This isn't a matter of discovering contingent facts that may or may not be the case. This is a matter of what must be true lest you fall into paradox and incoherence and undermine the possibility of science and reason themselves. For instance, doubting rationality in principle is incoherent, because it is presumably reason that you are using to make the argument, albeit poorly. Similar things can be said about arguments about the reliability of the senses. The only reason you can possibly identify when they err is because you can identify when they don't. Otherwise, how could you make the distinction?

              These may seem like obviously amateurish errors to make, but they surface in various forms all over the place. Scientists untutored in philosophical analysis say things like this all the time. You'll hear absurd remarks like "The human brain evolved to survive in the universe, not to understand it" with a confidence of understanding that would make Dunning and Kruger chuckle. Who is this guy? Some kind of god exempt from the evolutionary processes that formed the brains of others? There are positions and claims that are simply nonstarters because they undermine the very basis for being able to theorize in the first place. If you take the brain to be the seat of reason, and then render its basic perceptions suspect, then where does that leave science?

              We're not talking about the products of scientific processes strictly, but philosophical presuppositions that affect the interpretation of scientific results. If you assume that physical reality is devoid of qualitative properties, and possesses only quantifiable properties, then you will be led to conclusions latent in those premises. It's question begging. Science no more demonstrates this is what matter is like than the proverbial drunk looking for his keys in the dark demonstrates that his keys don't exist because they can't be found in the well-lit area around a lamp post. What's more, you have now gotten yourself into quite the pickle: if the physical universe lacks qualities, and the brain is physical, then what the heck are all those qualities doing inside of it! Consciousness has simply been playing the role of an "X-of-the-gaps" to explain away anything that doesn't fit into the aforementioned presuppositions.

              You will not find an explanation of consciousness as long as you assume a res extensa kind of matter. The most defining feature of consciousness is intentionality, and intentionality is a species of telos, so if you begin with an account of matter that excludes telos, you will never be able to explain consciousness.

          • ajross a day ago

            > I don't know what a "simple idea" is here

            To be blunt: it's whatever was in your head when you decided to handwave-away science in your upthread comment in favor of whatever nonsense you wanted to say about "Cartesian dualism".

            No, that doesn't work. If you want to discount what science has to say you need to meet it on its own turf and treat with the specifics. Color is a theory, and it's real, and fairly complicated, and Descartes frankly brought nothing to the table.

            • lo_zamoyski 3 hours ago

              > it's whatever was in your head

              That doesn't make anything "simple". Analysis operates on existing concepts, which means they're divisible. It's clear words are being thrown around without any real comprehension of them. This is a stubborn refusal to examine coarse and half-baked notions.

              > If you want to discount what science has to say you need to meet it on its own turf and treat with the specifics.

              Except this isn't a matter of science. These are metaphysical presuppositions that are being assumed and read into the interpretation of scientific results. So, if anything, this is a half-assed, unwitting dabbling in metaphysics and a failure to meet metaphysics on its own turf.

              > whatever nonsense you wanted to say about "Cartesian dualism" [...] Descartes frankly brought nothing to the table

              That's nice. But I haven't "handwaved-away" science. It is you who have handwaved-away any desire to understand the subject beyond a recitation of an intellectually superficial grasp of what's at stake. To say Descartes has nothing to do with any of this betrays serious ignorance.

              See above [0].

              [0] https://news.ycombinator.com/item?id=44014069

  • lo_zamoyski 2 days ago

    Computer science has nothing to do with physical computing devices. Or rather, it has about as much to do with computers as astronomy has to do with telescopes. You can do it all on paper. The computing device affords you nothing new beyond scale and speed over simulating the mechanical work on paper. Electrons are irrelevant. They are as relevant to computer science as the species of tree from which the wood in your pencil comes is relevant to math.

    Obviously, being able to use a computer is useful, just as using a telescope is useful or being able to use a pencil is useful, but it's not what CS or software engineering are about. Software is not a phenomenon of the physical device. The device merely simulates the software.

    This "centering" of the computing device is a disease that plagues many people.

  • jtbayly 2 days ago

    Plenty of programmers know nothing about electrons. Think kids.

    Most programmers never think once about electrons. They know how things work at a much higher level than that.

    • bluGill 2 days ago

      That only works because some EE has ensured the abstractions we care about work. You don't need to know everything, you just need to ensure that everything is known well enough by someone all the way down.

    • DontchaKnowit 2 days ago

      Yeah? So what. They're still using abstractions that were created by people who know about electrons.

  • jemmyw 2 days ago

    It doesn't skip over it. First, this is an example, not the primary thing the article is talking about. Second, just above, the article states that some lower-level knowledge is necessary in the transit example. If you map those things, written by someone who, as they say, isn't that knowledgeable about programming, they make sense without diving into the specifics.

  • growlNark 2 days ago

    > I think this too easily skips over the fact that the abstractions are based on a knowledge of how things actually work - known with certainty.

    models ≠ knowledge, and a high degree of certainty is not certainty. This is tiring.

    • AIPedant a day ago

      This seems like a misreading of the comment. The models and knowledge of arrays, classes, etc, are known with "arbitrarily high" certainty because they were designed by humans, using native instruction sets which were also designed by humans. Even if this knowledge is specialized, it is readily available. OTOH nobody has a clue how neurons actually work, nobody has a working model of the simplest animal brains, and any supposed model of the human mind is at best unfalsifiable. There's a categorical epistemic difference.

    • achierius 2 days ago

      But doesn't this argument defeat itself? We cannot, a priori, know very much at all about the world. There is very, very little we can "know" with certainty; that's the whole reason Descartes resorted to the cogito argument in the first place. You and GP just choose different lines to draw.

      • growlNark 2 days ago

        Yes, I agree completely. I think the a priori/a posteriori distinction is always worth making, though.

        This really does matter a lot more when floating signifiers get involved; I'm not actually contesting that our models of electrical engineering model reality quite well.

  • inglor_cz 2 days ago

    For me, a computer is at best semi-transparent.

    I can rely on a TCP socket guaranteeing delivery, but I am not very well versed in the algorithms that guarantee that, and I would be completely out of my depth if I had to explain the inner workings of the silicon underneath.

  • pimlottc 2 days ago

    > Nobody in CS is approaching the computer as an entirely black box and making up how they think or hope it works.

    Haven't you heard about vibe coding?

  • ImHereToVote 2 days ago

    "Nobody in CS is approaching the computer as an entirely black box and making up how they think or hope it works."

    That is literally how we approach transformers.

    • danielmarkbruce a day ago

      Who is "we"? Lots of people (including me) know how transformers work. Just because we can't do all the math in our heads quickly enough to train a model or run inference mentally doesn't mean we don't know mechanically how they work.

      • ImHereToVote a day ago

        We know how they are trained. We just don't know how the trained model works, since the program is emergent.

        • danielmarkbruce a day ago

          Lol, we also know how inference works. The fact that LLMs turned out to be surprisingly effective doesn't mean we don't know how they work. There are many fields where we know the underlying physics and it's just difficult to predict real-world results because there are so many calculations. What's next, are you going to tell an aerospace engineer that flight is "emergent" because we need to run simulations or experiments?

          • ImHereToVote a day ago

            The actual program is a black box. We have been able to dissect some details but the whole system is hard to understand. The program is grown more than developed. Understanding the concept of inference doesn't help you much.

            • danielmarkbruce 21 hours ago

              "the program" is some silly abstraction you've made up. If you don't understand the underlying mathematical operations that's fine, but many of use do. And they aren't that complicated in the grand scheme.

              Every complex system is hard to understand due to the number of variables versus human working memory.

              • ImHereToVote 9 hours ago

                This is like saying that understanding water phase changes makes you competent at ice skating. You know what I'm talking about.

perching_aix 2 days ago

For those looking to skip the yap, an overview of the titular "new paradigm" starts a third of the way in, at the section "A GOOD WAY NOT TO DIE".

  • procaryote 2 days ago

    Thank you

    It's still just fluff though. So the author thinks the mind is a control system... Sure, that's a model.

    Does it explain observations better? What predictions does this model let us make that differ from other models?

    The article was needlessly wordy, so I might have missed it if it was hiding somewhere.

    • perching_aix a day ago

      I felt the same way, so I just stopped reading shortly after - can't tell you, sorry.

  • vl 2 days ago

    I put the text of the article into ChatGPT 4.5 and used various questions to extract relevant and interesting bits and pieces. A great use case for LLMs.

winddude 2 days ago

I'm being pydantic about "So unlike the thermostat in your house, which doesn't have to contend with any other control systems, all of the governors of the mind have to fight with each other constantly", but what about automated blinds, self-tinting windows, automatic skylights, humidistats, and humans opening and closing windows?

It's also important to note that other control systems in the body affect control systems in the mind, e.g. the endocrine system.

marviel 2 days ago

There are two good books that explore this concept from different angles:

"The Emotion Machine", by Marvin Minksy (the AI view)

&

"The Art of Empathy", by Karla McLaren (the Internal / Emotional View)

ravenstine 2 days ago

Maybe I'm missing the point of the article, but the application of cybernetics to psychology was already proposed (albeit not by a psychologist) at least as far back as 1960 in the book Psycho-Cybernetics. This "new paradigm" doesn't sound particularly new.

This sentence also puzzled me:

> Lots of people agree that psychology is stuck because it doesn’t have a paradigm

Psychology might not have a grand unifying paradigm, but it's been highly paradigm-driven since its inception.

  • profstasiak a day ago

    Exactly. I cannot believe they mention cybernetics and psychology in one sentence:

    > The science of control systems is called cybernetics, so let’s call this approach cybernetic psychology.

    but they never mention Psycho-Cybernetics

hybrid_study 2 days ago

From a layman's perspective, I've yet to find a better understanding of psychology than what Steven Pinker posits in his https://en.wikipedia.org/wiki/How_the_Mind_Works (1997), and even today it doesn't seem like the basic paradigm (the evolutionary constraints) has changed much.

jondlm 2 days ago

The topic of this article felt familiar to me. It's similar to ideas in IFS: internal family systems. IFS also uses control systems to describe our internal landscape.

If the concept of multiplicity (we humans being a system of smaller systems) resonates with you, consider reading No Bad Parts by Richard C. Schwartz. I've personally found it immensely helpful.

  • Melatonic 2 days ago

    I've heard about this and also some newer works based partially on IFS. Looks very interesting.

RetroTechie 2 days ago

Interesting concepts there, as applied to psychology. And kudos for making it available freely. Definitely worth a read imho!

Skimming through its chapter on AI made me think of Dave of EEVblog fame. In some of his videos he wears a T-shirt saying "always give negative feedback!". Which is correct, for those who understand electronics (specifically: opamps).

In short: design the circuit such that when the output is above target, the circuit works to lower it (the voltage, in this context), and when below, works to raise it. Output stability requires a feedback loop constructed to that effect.

There are analogies in many fields of technology (logistics, data buffering, manufacturing, etc., and yes, thermostats).

I'll leave it there, other sites like Wikipedia (or EEVblog!) better explain opamp-related design principles.

From what I've read, current AI systems appear like opamp circuitry with no (or poor) feedback loop: a minor swing in input causes output to go crazy. Or even positive feedback: same thing, but self-reinforcing. Guardrails are not the fix: they just clip the output to ehm.. 'politically correct' or whatever. Proper fix = better designed feedback loops. So yes, authors of this book may definitely be onto something.

  • jotux 2 days ago

    An emotional analogy I've often made, related to EE, is automatic gain control. When you experience long periods of emotional stability without significant highs or lows, your brain applies a form of gain control. This makes the threshold for something amazing very low and something awful very high. As a result, people can feel overwhelmed by relatively trivial issues. Many self-help books, religions, and philosophies emphasize appreciating past experiences or considering how situations could have been worse. I see this as a way to counteract the brain's natural tendency to adjust its gain control.
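
    A crude AGC loop looks something like this (target level and adaptation rate made up):

      def agc(samples, target=1.0, rate=0.05):
          gain, out = 1.0, []
          for s in samples:
              out.append(s * gain)
              # Nudge the gain so the output level tracks the target.
              gain = max(gain + rate * (target - abs(out[-1])), 0.0)
          return out

    After a long quiet stretch the gain has crept up, so even a small input spikes the output, which is roughly the effect described above.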

  • temp0826 2 days ago

    Been a long time since my ee courses (a degree I never used...but I digress...). How did that saying go?

    Want an oscillator? Design an amplifier. Want an amplifier? Design an oscillator.

standardUser 2 days ago

The Monopoly analogy is brilliant, and I appreciate the author's focus on the importance of language and terms. Plus, the gentle but brutal takedown of academic orthodoxy, such as...

> Those divisions are given by the dean, not by nature.

ikesau 2 days ago

what's unclear to me is how you identify the Real Units.

needing-to-breathe-ness is (probably) a gimme, but what are the units that will explain which route i take on my walk today? and how do you avoid defining units that aren't impressionistic once you need to rely on language and testimony to understand your research subject's mental state?

my understanding of psychological constructs is that they're earnest attempts to try and resolve this problem, even if they've led us to the tautological confusion we're in now.

YeGoblynQueenne a day ago

>> Here’s the meat of The Mind in the Wheel: the mind is made out of units called control systems.

I see someone re-invented Rodney Brooks' Subsumption Architecture [1].

It's a good idea but it doesn't work. Enfin, it works, but only up to a certain point, which is how well robotics works today: yes to roombas and industrial robots, no to robot maids/butlers and level-5 self-driving cars. It sure doesn't work as a model of a human mind. I don't think Brooks even had the temerity to suggest it should, tbh.

More importantly this just goes to show what happens when psychologists and AI scientists stop talking to each other, the former because they're lost up their own bums [2] and the latter because they've switched focus to creating the next automated shaker of the magick money tree.

What happens is that the ... well, wheel, keeps getting reinvented, over and over again, and nobody is really trying to make, you know, a vehicle. With wheels. That do something else than just sit there being wheel-y.

_________________

[1] https://en.wikipedia.org/wiki/Subsumption_architecture

[2] I just mean the reproducibility crisis.

huijzer 2 days ago

I have done an MSc in CS and a PhD in a psych dept. Sometimes I had the feeling that the system didn't want me to get anything done.

  • staunton 2 days ago

    Can you elaborate? It's hard to take much away from your comment without any more details.

    • huijzer 2 days ago

      Well, peer review is primarily a very old system where nobody seems to care about speeding up the process. Also, why are journals allowed to make huge amounts of money without paying the academics and without providing much service either?

p_v_doom a day ago

I don't think this is a new paradigm. It's more a revival of an old one: cybernetics. It was incredibly successful in its time, to the point where so many of its findings are integral to our world today, and we kind of forget where they come from and the original principles behind them. I am happy to see this revival though; I think it heralds good stuff...

floxy 2 days ago

Someone needs to get the author a copy of GEB.

kgwxd 2 days ago

Flagged? The article appears to have triggered some very interesting discussions here. Exactly the stated purpose. Should we all go back to talking about how the government and billionaires are ruining the world instead?

  • vixen99 a day ago

    A line I really liked: "But really, I like cybernetic psychology because it stands a chance of becoming overturnable."

almosthere 2 days ago

Just do stuff. To fix the mind, you have to keep busy. Don't focus on yourself, focus on projects. For most of the long history of humanity, thinking about themselves was the last thing people did; that's a recent development. Stop. Do things... basically, touch grass.

  • perching_aix 2 days ago

    Nothing quite like treating the symptoms instead of the root cause, am I right?

  • yeahsure 2 days ago

    This comment comes across as quite naive. If the solution were truly that simple, we wouldn't see so many people suffering from depression, anxiety, addiction, etc. around the world.

  • Trasmatta 2 days ago

    Just "doing things" will not heal trauma. Doing things can be a way to try to suppress the pain of trauma, but it will still be there, just obscured and affecting the person's life in ways they can't seem to understand. And alternatively, trauma can also be a major blocker for actually being ABLE to do things.

  • bbor 2 days ago

    In the interest of staying within the rules of the site, I'll just say that this is a clueless, offensive, harmful comment. No, you don't know more than 125+ years of psychologists studying mental health disorders because you make decent money right now and feel ok.

    • d3ckard 2 days ago

      I would like to provide a piece of anecdotal evidence: what he suggests actually worked for me best.

      • kayodelycaon 2 days ago

        I truly hope it does but I have seen plenty of people use that strategy until something broke.

        As I see it, you can either deal with a problem on your own terms or you can let it eventually deal with you on its terms.

      • yeahsure 2 days ago

        Just another N=1 anecdote: It didn’t work for me when I was dealing with depression a few years ago.

        I also lost a dear friend to suicide, and he was very successful and active in his field when it happened. Nobody saw it coming.

        It's just not that simple.