"Hiding" state is necessary to endow it with well-defined invariants. This can be done in many FP languages, too. The semantics-side implications of "encapsulated" state w/ proper invariants have yet to be explored, though, and this is where newer PL formalisms like "homotopy types" might end up being quite helpful.
> The semantics-side implications of "encapsulated" state w/ proper invariants have yet to be explored, though
It seems that that's called a state machine, and OOP objects should come with state charts, but they don't.
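To make that concrete, here is a minimal Haskell sketch of what an explicit state chart could look like when encoded in types; the ConnState type and its transitions are made up for illustration:

```haskell
-- Hypothetical connection life cycle, with the "state chart" written
-- down as a data type: these four states and three transitions are all
-- there is, so the diagram cannot silently grow.
data ConnState = Closed | Opening | Open | Closing
  deriving (Eq, Show)

-- Each legal transition is an explicit function; an illegal transition
-- returns Nothing instead of corrupting the state.
openConn :: ConnState -> Maybe ConnState
openConn Closed = Just Opening
openConn _      = Nothing

confirmOpen :: ConnState -> Maybe ConnState
confirmOpen Opening = Just Open
confirmOpen _       = Nothing

closeConn :: ConnState -> Maybe ConnState
closeConn Open = Just Closing
closeConn _    = Nothing
```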
> and this is where newer PL formalisms like "homotopy types" might end up being quite helpful.
PL research would actually get adopted if they didn't insist on using the worst possible names for everything. If they're not calling something an "intuitionistic type theory in the calculus of constructions" they're calling it a "pi-calculus".
> It seems that that's called a state machine, and OOP objects should come with state charts, but they don't.
That is because the state chart would so quickly explode into countless states, or into transitions so difficult to understand, that such a diagram would be instantly useless. Which only goes to show how unrealistic the idea is that you can ever fully understand such a system, and that is exactly the problem. Granted, there may be certain areas of related state that are separated from other areas, but once the program becomes non-trivial, the lines usually blur, unless you use an approach that reminds me very much of FP, except that it uselessly wraps functions in classes and objects instead of making use of modules and plain functions.
In an FP style, ideally each function is a thing you can look at separately from the whole system, provided you know what its inputs can be (which may be difficult). That makes for testable code. I should be able to test every function separately, without having to use ten other classes to make instances to set up an environment in which I just hope that what I wanted to test can actually be tested.
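As a contrived example of what I mean (the function is made up, but the point is that there is nothing else to set up):

```haskell
import Data.Char (toUpper)

-- A pure function: everything it depends on arrives in its arguments,
-- so there is no hidden environment to construct before calling it.
shout :: String -> String
shout s = map toUpper s ++ "!"

-- A "unit test" is then just a call plus a comparison of outputs.
main :: IO ()
main = do
  print (shout "hello" == "HELLO!")  -- True
  print (shout ""      == "!")       -- True
```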
> In an FP style, ideally each function would be a thing you can look at separated from the whole system
That is theoretically impossible - complexity is fundamentally not decomposable into parts in the general case. That is, given a complex function, you may not be able to extract even one more meaningfully separate part without the remaining core still being too complex.
OOP’s general idea is to encapsulate just enough of the complexity to make it possible to reason about its outside API, while the complexity will live inside, allowing the class to enforce some of its invariants.
Don’t get me wrong, I’m not saying that FP is bad, hell, I think that both paradigms are essential, they are not either-or choices.
I think you are slightly misunderstanding me or I did not put it very clearly. Of course there will be complexity inside functions in an FP style. I think I never said there would not be.
What I want to express is that I can call every function of the program separately. I might have to put effort into preparing the call's arguments, of course, but in the end I can look at its inputs and outputs in a unit test, separate from any environment setup. The environment is basically in the arguments of the function.
The FP paradigm encourages people to avoid global state and state mutation, which helps with reducing the setup effort required to make the arguments for the function call.
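A small made-up Haskell sketch of that last point, with the "environment" (here a user lookup table) passed in as an ordinary argument:

```haskell
import qualified Data.Map as Map

-- Instead of reading some global registry, the function receives the
-- registry as a parameter; the environment is literally an argument.
type UserDB = Map.Map Int String

greeting :: UserDB -> Int -> String
greeting db uid =
  case Map.lookup uid db of
    Just name -> "Hello, " ++ name
    Nothing   -> "Hello, stranger"

-- "Setting up the environment" for a test is just building a value.
main :: IO ()
main = do
  let db = Map.fromList [(1, "Ada"), (2, "Grace")]
  print (greeting db 1 == "Hello, Ada")       -- True
  print (greeting db 3 == "Hello, stranger")  -- True
```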
In an OOP-style program, I cannot simply call and test every part separately. I have to create a kind of landscape of objects that has gone through the right set of state mutations, which hopefully sets up an environment in which I can test one specific case of a method doing what it should do. That is the moment when the state diagram has already exploded into countless states, usually impossible to keep in your head all at once. It might also be that the constructor of an object interferes with the setup you actually want; then you need to apply further mutations to get to the desired state, doing more work than would ideally be necessary.
I still see a place for OOP in things like GUIs. People are trying to go declarative or functional there as well, but there it seems like a normal thing to have a widget actually change state, to avoid the overhead of creating a new widget and re-displaying it. But maybe in the future FP will invade this territory as well somehow.
And yes, you can combine FP and OOP, but many common practices used in OOP are detrimental to the advantages FP can bring. I think it would be best to limit OOP to the parts of the system where it makes sense and then wrap those in an API that protects the rest of the system from having to deal with mutation all the time. The question then becomes again: "What is OOP?" Is it still OOP if I work with structs and functions operating on structs, instead of objects? Do we use Alan Kay's definition, with message passing and each object being its own little machine? I think Erlang gives us some combination of that. As Joe Armstrong said in a talk with Alan Kay, it is either the most or the least object-oriented language. Though maybe nowadays we have other candidates for that title as well.
It might not be better, but at least I don't need a dictionary to understand what it is all about; I just need to read it again and sort out the words in my head.
Absolutely. Why else make the const char * data private in a string class? I have to know it's in a valid state so I can get it to the next valid state when the caller runs an operation on it.
But then a lot of proving (small-p proving) that a C++ object is in a valid state amounts to Hoare predicates and other SPARK-Ada-like expressions.
Certainly FP could do the same, and, like C++, define that away once callers and callees can assume the undefined behavior is gone?
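For what it's worth, the usual FP counterpart is a module that exports a type abstractly and only exposes operations that preserve its invariant, much like the private char * in the string class. A minimal Haskell sketch with a made-up NonEmptyName type:

```haskell
module NonEmptyName (NonEmptyName, mkName, getName, appendSuffix) where

-- The constructor is not exported, so the only way to obtain a
-- NonEmptyName is through mkName, which checks the invariant.
newtype NonEmptyName = NonEmptyName String

-- Invariant: the wrapped string is never empty.
mkName :: String -> Maybe NonEmptyName
mkName "" = Nothing
mkName s  = Just (NonEmptyName s)

getName :: NonEmptyName -> String
getName (NonEmptyName s) = s

-- Operations defined inside the module may rely on the invariant and
-- must preserve it: appending can never produce an empty string.
appendSuffix :: String -> NonEmptyName -> NonEmptyName
appendSuffix suffix (NonEmptyName s) = NonEmptyName (s ++ suffix)
```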
They seem to be necessary if you want a notion of "equivalence" (for both values and types) that enables you to make functions, operations, constructs etc. independent of any notion of "underlying representation" as well as seamlessly applicable across equivalent 'representations'. This is desirable in both higher mathematics (where homotopy types were first developed) and software engineering, for much the same reasons.
I read about this a while ago, I'm not an expert but this is my take on it:
In homotopy type theory an equivalence of types is a first-class value that you can manipulate, and you can also separate types from their 'implementations'. The classic example: you have a Nat type with a Peano construction (a Nat is zero or the successor of a Nat). This is not very efficient, but you write functions with it, prove things, etc. Then it's time to optimize, and you change your Nat implementation to something more efficient (e.g. a binary encoding: a Nat is zero, or twice a Nat, or twice a Nat plus one). The functions and proofs you wrote against the previous implementation will still work, and your type signatures won't have to change.
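Haskell has no univalence, so a type class can only approximate what HoTT gives you directly, but it shows the shape of the idea; the two Nat encodings below are the standard textbook constructions, everything else is made up:

```haskell
-- Unary (Peano) naturals: a Nat is zero or a successor.
data Peano = Z | S Peano

-- Binary naturals: a Nat is zero, twice a Nat, or twice a Nat plus one.
data Bin = BZ | Twice Bin | TwicePlusOne Bin

-- Client code is written against this interface, not a representation.
class Natural n where
  zero  :: n
  suc   :: n -> n
  toInt :: n -> Integer

instance Natural Peano where
  zero        = Z
  suc         = S
  toInt Z     = 0
  toInt (S n) = 1 + toInt n

instance Natural Bin where
  zero                   = BZ
  suc BZ                 = TwicePlusOne BZ   -- 0 + 1 = 1
  suc (Twice n)          = TwicePlusOne n    -- 2n + 1
  suc (TwicePlusOne n)   = Twice (suc n)     -- (2n + 1) + 1 = 2(n + 1)
  toInt BZ               = 0
  toInt (Twice n)        = 2 * toInt n
  toInt (TwicePlusOne n) = 2 * toInt n + 1

-- This definition never mentions a representation; in HoTT, univalence
-- would let proofs about it transport across the equivalence as well.
three :: Natural n => n
three = suc (suc (suc zero))

main :: IO ()
main = print (toInt (three :: Peano), toInt (three :: Bin))  -- (3,3)
```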