Yes! Every functional codebase I see seems to be littered with the artifacts of an effort to escape the reality that the known universe and everything that happens within it is stateful... including our code. It’s inescapable, so why does the FP crowd make state even more difficult to reason about through batshit crazy abstractions? I’m honestly completely baffled by the FP movement and the trend of talking down OOP as some simpleton concept that should be abandoned.
Nothing in the universe is stateful. You only think otherwise because you impose an object abstraction on things and treat their evolution over time in a non-rigorous way. This lack of proper modeling of the effects of time on a system is what leads to all the problems with managing state.
FP makes this relationship of change to time explicit. Instead of having a "single object" that "changes state" when some event occurs, you have two objects - one before the event, one after. Systems that rely on mutable state conflate these two and then force you to deal with the consequences of that conflation.
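To make that concrete, here is a minimal Haskell sketch of "one value per point in time" — applying an event produces a *new* state rather than overwriting the old one. The `Account`/`deposit` names are illustrative, not from anything upthread:

```haskell
-- An event ("deposit") maps one immutable state to the next.
data Account = Account { balance :: Int } deriving (Eq, Show)

deposit :: Int -> Account -> Account
deposit amount acct = acct { balance = balance acct + amount }

before :: Account
before = Account { balance = 100 }

after :: Account
after = deposit 50 before
-- `before` still has balance 100; both states coexist and can be compared.
```

Nothing is conflated: the pre-event and post-event states are separate values, and code holding a reference to `before` can never observe it "changing".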
Not quite. The computer you're referring to has one state before an event, and another state afterwards.
Modeling those as two distinct, immutable states provides enormous benefits over modeling it as a single state that "mutates". Doing this eliminates a large class of serious bugs.
The reason for these benefits is that far from trying to "escape the reality that the known universe and everything that happens within it is stateful," functional programming uses a more rigorous model of state that better reflects the relationships between states over time.
Your points assume that OOP programs consist of a single god class stuffed full of nothing but mutable properties... which is obviously not sound OOP design.
It doesn't have to be god classes. Any mutable property that's accessible beyond a local scope raises these issues, and involves the collapsing of state over time.
That's not to say it's never valid to make the conscious engineering choice to do that, but it's a poor default to have. Functional programming languages have proved that this can be handled better.
Not necessarily - it depends on the language and how closures are implemented.
For example in Java, closures, created via lambda expressions or anonymous classes, can only capture local variables from their enclosing scope that are final or effectively final. (That was the rule in Java 8, and it hasn't been relaxed since.) So they don't export mutability of local variables. (Although they don't prevent you from mutating object fields from within a closure - again, a consequence of the host language semantics.)
Of course, closures in other imperative languages often violate FP immutability constraints. But that's only because they're given unrestricted access to the underlying imperative language features.
An interesting case is OCaml, which allows closures to mutate ref cells, e.g.:
    let x = ref 5 in
    let set y = x := y in
    set 4;
    !x;;
...which outputs 4. This is compatible with OCaml's approach to mutation, and the mutable variable is at least not the default variable type - it's an explicitly mutable cell value. But this certainly breaks various good properties that immutability provides.
For example, the function `set` above provides no indication at the type level that it performs a side effect, which undermines the ability to reason about function behavior without examining the implementation.
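For contrast, a sketch of the same cell in Haskell using `Data.IORef` (the `mkCell` name is made up for this example) — here the side effect is unavoidably visible in the types:

```haskell
import Data.IORef

-- mkCell returns a setter and a getter for a fresh mutable cell.
-- Unlike OCaml's set : int -> unit, both operations carry IO in
-- their types, so the effect is visible without reading the body.
mkCell :: Int -> IO (Int -> IO (), IO Int)
mkCell initial = do
  x <- newIORef initial
  pure (writeIORef x, readIORef x)
```

A caller sees `Int -> IO ()` and knows this function does more than compute a value.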
If you want to say OCaml is only providing an FP facade which is broken by this behavior, I'm sure many Haskell programmers would agree with you. :)
If you're using them to pass around mutable state, yes.
Which is why immutable/pure-functional is always the best first choice.
If it's inconvenient to do something with pure functions, and you want to pass a bit of state around, you can use the State monad - it's 'simulated' mutable state, in that it's all pure functions under the hood, but it looks and feels like you're doing mutation.
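A tiny example of that "looks like mutation, is actually pure" style, using `Control.Monad.State` from the mtl package (`tick` is an illustrative name):

```haskell
import Control.Monad.State

-- tick returns the current counter value and "sets" the next state.
-- Under the hood this is just a pure function Int -> (Int, Int).
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

threeTicks :: (Int, Int)
threeTicks = runState (tick >> tick >> tick) 0
-- (value of the last tick, final counter state)
```

No reference is ever mutated; `runState` just threads each intermediate state into the next call.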
If it's too inefficient to simulate state via the State monad, you can use state threads (ST). This is more efficient in that it compiles down to real mutations, but it's safe in that you can't share your mutations until you've exited the ST. So it should force determinism. This is what you'd use for a fast, mutable, in-place sort.
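A minimal ST sketch (a mutable accumulator rather than a full in-place sort, to keep it short):

```haskell
import Control.Monad.ST
import Data.STRef

-- sumTo uses a genuinely mutable accumulator internally, but runST's
-- type guarantees the STRef cannot escape, so sumTo is observably
-- a plain pure function.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc
```

The same pattern with `STUArray` is what you'd reach for to implement that fast in-place sort.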
The above two solutions rule out having multiple writers, which is where STM (software transactional memory) comes in. STM gives you atomic blocks which behave more or less like (in-memory) database transactions. I mainly use it for implementing workers and jobs and queues, etc.
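The classic STM demo is an atomic transfer between two accounts, using `Control.Concurrent.STM` from the stm package:

```haskell
import Control.Concurrent.STM

-- transfer moves `amount` between two TVars in one transaction:
-- no other thread can observe a moment where the money has left
-- `from` but not yet arrived at `to`.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to (+ amount)
```

Wrapping a call in `atomically` commits both writes together, and the runtime retries the transaction if another writer raced it.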
And if you actually just want to do anything then you use IO.
It's more about being aware (in a compiler-checkable way) whether you're mutating or not than it is about outlawing mutation.
If that were the case I'd just fix your comment instead of appending another comment after the fact. But that would make the arguments harder to reason about I think.
> including our code
Do you use git? Or does your team all just edit the same code on a shared server? (The OO solution here is to make state 'private'. It supposedly doesn't really matter that you're all editing the same documents at the same time, because they're behind getters and setters)
> why does the FP crowd make state even more difficult to reason about?
State is difficult to reason about, so we maximise our stateless code.
> talking down OOP
I talk down OOP when I treat it as the diff between OOP and FP. That is, coding is 80+% the same whatever you use. It's the last 10-20% where the differences emerge: Lambdas are better than anonymous inner classes. Not having null is better than having null. Not needing to cast is better than casting. Composition is better than inheritance. Stateless is better than stateful. Value-equality is better than reference-equality. Generics + type erasure is clunky without higher-kinded types.
> The OO solution here is to make state 'private'. It supposedly doesn't really matter that you're all editing the same documents at the same time, because they're behind getters and setters
This just isn't true in the slightest sense. Honestly, where does this kind of anti-OOP trope come from?
It isn't a direct bashing of OOP. It's a bashing of the claim (see wiki quote) that you can make state manipulation safe by hiding it away inside a class.
> Encapsulation
> Encapsulation is an object-oriented programming concept that binds together the data and functions that manipulate the data, and that keeps both safe from outside interference and misuse. Data encapsulation led to the important OOP concept of data hiding.
It's arguably the core concept of OO, and I don't believe it keeps data 'safe from outside interference and misuse'.
So doesn't FP make similar claims about mutation, i.e., "make state manipulation safe" through mechanisms like closures? How/why are closures considered more palatable?