This article doesn't explain the benefits of the suggested approach well enough.
And the last example looks like poor advice and contradicts the earlier advice: there's rarely a global condition that's enough to check once at the top; the condition is usually inside the walrus. And why write for walrus in pack { walrus.throbnicate() } instead of making throbnicate a function that accepts the whole pack?
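As a rough Rust sketch of the alternative this comment is suggesting (Walrus, Pack, and throbnicate are the article's example names; the types here are invented):

struct Walrus;
struct Pack { walruses: Vec<Walrus> }

impl Walrus {
    fn throbnicate(&mut self) { /* per-walrus work */ }
}

impl Pack {
    // push the for down: callers say pack.throbnicate() and never write the loop
    fn throbnicate(&mut self) {
        for walrus in &mut self.walruses {
            walrus.throbnicate();
        }
    }
}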
Code complexity scanners⁰ eventually force pushing ifs down. The article recommends the opposite:
By pushing ifs up, you often end up centralizing control flow in a single function, which has a complex branching logic, but all the actual work is delegated to straight line subroutines.
⁰ https://docs.sonarsource.com/sonarqube-server/latest/user-gu...
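For what it's worth, a minimal sketch of the shape the quoted passage describes, with the branching centralized up top and the work delegated to straight-line helpers (all names invented):

struct Walrus;

// all the branching lives here, in one place...
fn frobnicate(walrus: Option<Walrus>) {
    match walrus {
        Some(w) => do_the_work(w),
        None => handle_missing(),
    }
}

// ...and the helpers are straight-line code with no conditions
fn do_the_work(_w: Walrus) {}
fn handle_missing() {}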
The way to solve this is to split decisions from execution and that’s a notion I got from our old pal Bertrand Meyer.
if (weShouldDoThis()) {
  doThis();
}
It complements or is part of functional core, imperative shell. All those checks being separate makes them easy to test, and if you care about complexity you can break out a function per clause in the check.
IME this is frequently a local optimum though. "Local" meaning: until some requirement changes or an edge case is discovered where some branching needs to happen outside of the loop. Then, if you've got branching both inside and outside the loop, it gets harder to reason about.
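A tiny sketch of that decisions-vs-execution split, with the check broken out so it can be tested on its own (the predicate and its parameters are invented):

// decision: pure and trivial to unit test
fn should_do_this(is_enabled: bool, temperature: i32) -> bool {
    is_enabled && temperature > 30
}

// execution: straight-line, no branching
fn do_this() { /* the actual work */ }

// imperative shell: consults the decision, then acts
fn run(is_enabled: bool, temperature: i32) {
    if should_do_this(is_enabled, temperature) {
        do_this();
    }
}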
It can be case-dependent. Are you reasonably sure that the condition will only ever affect stuff inside the loop? Then sure, go ahead and put it there. If it's not hard to imagine requirements that would also branch outside of the loop, then it may be better to preemptively design it that way. The code may be more verbose, but frequently easier to follow, and hopefully less likely to turn into spaghetti later on.
(This is why I quit writing Haskell; it tends to make you feel like you want to write the most concise, "locally optimum" logic. But that doesn't express the intent behind the logic so much as the logic itself, and can lead to horrible unrolling of stuff when minor requirements change. At least, that was my experience.)
I took a version of this away from Sandi Metz’s 99 Bottles of OOP. It’s not really my style overall, but the point about moving logic forks up the call stack was very well taken when I was working on a codebase where we had added a ton of flags that got passed down through many layers.
https://sandimetz.com/99bottles
Yeah, I immediately thought of "The Wrong Abstraction" by the same author. Putting the branch inside the for loop is an abstraction, saying "the for loop is the rule, and the branch is the behavior". But very often, some new requirement will break that abstraction, so you have to work around it, and the resulting code has an abstraction that only applies in some cases and doesn't in others, or you force a bunch of extra parameters into the abstraction so that it applies everywhere but is hard to follow. Whereas if you hadn't made the abstraction in the first place, the resulting code can be easier to modify and understand.
https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction
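A contrived Rust sketch of the failure mode described above, where the loop stays "the rule" and every new requirement becomes another flag forced into it (the flags are invented):

// each new requirement adds a parameter, and the loop body branches on all of them
fn process_all(items: &[i32], skip_negatives: bool, double_evens: bool, dry_run: bool) {
    for &item in items {
        if skip_negatives && item < 0 {
            continue;
        }
        let value = if double_evens && item % 2 == 0 { item * 2 } else { item };
        if !dry_run {
            process(value);
        }
    }
}

fn process(_value: i32) {}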
Terrible advice. It's the exact opposite of "Tell, don't ask".
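For reference, the contrast behind "Tell, don't ask", sketched in Rust (the Account type is made up):

struct Account { balance: i64 }

impl Account {
    // "Tell, don't ask": the object owns both the check and the action
    fn withdraw(&mut self, amount: i64) -> bool {
        if self.balance >= amount {
            self.balance -= amount;
            true
        } else {
            false
        }
    }
}

// "Ask, then tell": the caller inspects state and decides, which is the
// shape the comment above is objecting to
fn caller_decides(account: &mut Account, amount: i64) {
    if account.balance >= amount {
        account.balance -= amount;
    }
}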
The performance cost of an if-statement inside a for-loop is negligible. That's not the bottleneck of your app. If you're building something that needs to be highly performant, sure. But that's not the majority of cases.
I agree, except for this example, where the author effectively (after a substitution) prefers the former:
The latter is not only more readable, but it is safer, because a match statement can ensure all possibilities are covered.
I like this a lot. At first, putting ifs inside the fors makes things more concise. But it seems like there's always an edge case or requirement change that eventually requires an if outside the for too. Now you've got ifs on both sides of the for, and you've got to look in multiple places to see what's happening. Or worse, subsequent changes will require updating both places.
So yeah, I agree, pulling conditions up can often be better for long-term maintenance, even if initially it seems like it creates redundancy.
There’s a lot of variable hoisting involved in moving conditional logic out of for loops, and it generally tends to improve legibility. If a variable is loop-invariant, it makes debugging easier if you can prove it is and hoist it.
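A small sketch of that hoist (names invented): once the loop-invariant condition is pulled out, it is checked exactly once and the invariance is visible at a glance.

// before: the invariant flag is re-tested on every iteration
fn render_all(items: &[i32], fancy: bool) {
    for &item in items {
        if fancy { render_fancy(item); } else { render_plain(item); }
    }
}

// after: the condition is hoisted out of the loop
fn render_all_hoisted(items: &[i32], fancy: bool) {
    if fancy {
        for &item in items { render_fancy(item); }
    } else {
        for &item in items { render_plain(item); }
    }
}

fn render_fancy(_item: i32) {}
fn render_plain(_item: i32) {}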
I really don't think there is any general rule of thumb here.
You've really got to have certain contexts before thinking you ought to be pushing ifs up.
I mean generally, you should consider pushing an if up. But you should also consider pushing it down, and leaving it where it is. That is, you're thinking about whether you have a good structure for your code as you write it... aka programming.
I suppose you might say, push common/general/high-level things up, and push implementation details and low-level details down. It seems almost too obvious to say, but I guess it doesn't hurt to back up a little once in a while and think more broadly about your general approach. I guess the author is feeling that ifs are usually about a higher-level concern and loops about a lower-level concern? Maybe that's true? I just don't think it matters, though, because why wouldn't you think about any given if in terms of whether it specifically ought to move up or down?
The author's main concern seems to be optimising performance critical code.
If your compiler is having trouble juggling a number of variables, just think how much difficulty the human brain will have with the same code.