C++20's concepts are typeclasses and make object-based polymorphism obsolete - shockingly!

So for contentious topics that have probably been discussed elsewhere, I like to consult Wikipedia. Asking some large language model for a definition of object-oriented programming would probably give the same results.

The way I read the Wikipedia article: things mentioned first must be important and defining; things further down the page, less so. There is no guarantee that this is accurate. The first sentence (as so often with Wikipedia articles) is quite good, though:

Object-Oriented Programming (OOP) is a programming paradigm based on the concept of “objects”, which can contain data and code.

I don’t think it’s possible to get a grip on the meaning of OOP without appealing to some intuition of an “object”. From what I understand, an object in OOP is meant to be an abstraction layer that helps the programmer model a problem. How exactly this is done depends on the specific brand of OOP.

Your definitions:

  1. I agree that the notion of objects is sold in a package with, at minimum, some rudimentary memory management. I dispute that this alone rules out Haskell: more than rudimentary memory management shouldn’t be a violation of object-orientation per se.

  2. Maybe that fits with what I called “some intuitive notion of what an object is”.

  3. Yes, I would want to add subtype polymorphism to the list of defining characteristics of OOP. Just from reading the Wikipedia article, that isn’t so clear to me anymore - e.g., what about duck typing? But no subtype polymorphism whatsoever would (probably) violate the intuitive notion of objects.

I would want to add to any of those definitions: OOP as a programming paradigm has been presented to me as a “best practice” to follow, in the sense that I am supposed to model a given problem in terms of objects and then implement those (e.g. in the form of a class hierarchy, or just class members).

But now consider the following: the first sentence from Wikipedia stipulates that objects contain data and code. Having “code” inside objects isn’t conceptually sound (to me, that is - maybe someone can explain); by that I mean the very statement is nonsensical. In Python, the self parameter is passed to member functions explicitly; in C++, this is passed implicitly. But in what sense do objects contain code?
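To make that concrete, here is a minimal Haskell sketch (all names are made up): a “method” is just a function that takes the object as an explicit argument, which is exactly what Python’s self makes visible and C++’s this hides.

```haskell
data Counter = Counter { count :: Int }

-- Nothing about `increment` lives "inside" Counter; it is a free
-- function over Counter values. An OO call counter.increment()
-- corresponds to the plain application: increment counter.
increment :: Counter -> Counter
increment c = c { count = count c + 1 }
```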

In C++, a class defines a scope and the class methods live inside that scope - OK. But how data and functions are organized shouldn’t really matter when talking about programming concepts in the abstract, I would argue. The fact that C++ has strict typing is much more important than the class scopes.


That is, unless you consider the history of OOP as a successor of mere “structured programming”, which really is about the Where, as in “Where do I put this code?” or “Should this be inside main, or should it be its own procedure?”

So this is why I like watching the deconstruction of OOP: it’s a wild mix of programming practices and actual concepts. Those concepts exist on their own, yet they are not enough to define OOP; and OOP is not necessary to enjoy the benefits of the concepts associated with it.

After that, not a lot remains.

OOP could be a sound abstraction layer - but it turns out the intuitive notion of an object is just way too general. The intuitive notion of a function (the mathematical notion) is specific; this is why you can have the semigroup of functions under composition. The intuitive notion of an ADT: same. Objects, in contrast, are a dead end. When someone tells you they built an object, you still don’t know anything about what you can or can’t do with it. In theory, there could be a canonical object hierarchy (like the typeclass hierarchy in Haskell) - but in practice it never came about. This is why OOP remains a ghost to me, one that we chase away.

We are in luck. I like nothing more than ideas.

Things that have fields are most likely what Category Theory calls «products». Once you unfold the definition of a «field» that you used in your definition of object orientation, I think it will turn out to be that of a product.
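If I read that right, a minimal Haskell sketch of the claim (Point and its fields are hypothetical names) would be: a record with fields is a product, and the field accessors are the projections.

```haskell
data Point = Point { px :: Double, py :: Double }

-- Point is isomorphic to the product (Double, Double); the accessors
-- px and py play the role of the projections fst and snd.
toPair :: Point -> (Double, Double)
toPair p = (px p, py p)

fromPair :: (Double, Double) -> Point
fromPair (x, y) = Point x y
```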

Aside: This would be an interesting experiment to run. What do you call a «field»?

So, I am with you so far as «fields» go. But what is this «dynamic dispatch» thing? I do not see how you can define «dynamic dispatch» without defining a paradigm of computation and a big bunch of ancillary notions. I expect that this will make you realize that you actually imply so much in your short definition that a long, formal definition of object orientation is near at hand.

I think my offering № 2 is exactly the formalization of the notion of «object» by itself. It seems you glossed over it, and then you say:

I shall take the challenge.

  • If you give me an example of an object, in any language, I shall explain how it is a dynamical process S → S, an automaton S × E → S, or a computation with state S × I → S × O.
  • If you give me an example of a dynamical process, automaton or computation with state, I shall furnish you an object that emulates it in constant space, in an informal language akin to C#.

Thereby, we shall informally establish an isomorphism between these two notions. From there, surely we shall better see what «object oriented programming» is.
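As a toy instance of the first direction (names invented for illustration): a counter object, presented as a computation with state, step : S × I → S × O, written curried in Haskell.

```haskell
data Input  = Incr | Get
data Output = Done | Value Int

-- The state S is Int; each "method call" consumes an input and
-- yields the next state together with an output.
step :: Int -> Input -> (Int, Output)
step s Incr = (s + 1, Done)
step s Get  = (s, Value s)
```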


This is spot on what my offering № 1 is about. What I said is that C++ is a successor of C that allowed the programmer to write the layout of their stuff in memory (a struct) and its way of claiming and freeing memory (constructors and destructors) near one another. Other procedures that need to know the memory layout are naturally also put close by, and it all gets wrapped into an «encapsulation», such that no one else can know the memory layout. That no one knows the memory layout other than the «methods» so wrapped gives you an easy proof of memory safety: you only need to prove that the methods are memory safe, and the rest of the code is automatically memory safe, because it can only work with that memory through these methods. You can add other «invariants» or «requirements» to your encapsulated data structure, and again you only need to prove that they hold for the methods. So, all the stuff you need to think hard about is written in one place. I think this «constant space complexity of thought» is the big reason object oriented programming caught on. I say people fail to appreciate it because they do not think of themselves as proving anything — but of course they are proving stuff, even if only at the shallowest level of mathematical rigour.
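A Haskell analogue of that argument, as a sketch (the module and all names are hypothetical): the representation is hidden behind the module boundary, so the invariant “the list is never empty” needs to be checked only for the handful of exported functions; all other code is safe by construction.

```haskell
module NonEmpty (NonEmpty, fromList, cons, toList) where

-- Invariant: the wrapped list is never empty. The constructor is not
-- exported, so only this module can violate (and must preserve) it.
newtype NonEmpty a = NonEmpty [a]

fromList :: [a] -> Maybe (NonEmpty a)
fromList [] = Nothing
fromList xs = Just (NonEmpty xs)

cons :: a -> NonEmpty a -> NonEmpty a
cons x (NonEmpty xs) = NonEmpty (x : xs)

toList :: NonEmpty a -> [a]
toList (NonEmpty xs) = xs
```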

1 Like

That is exactly my experience: adhering to OOP is supposed to help with cognitive load. “Having the code in one place” is/was innovative.

One could still argue that the rule should only be violated for the sake of some even bigger principle. And those bigger principles might be the abstractions of FP - but indeed, you convinced me that there is a distinct thing, a concept, that deserves a name (OOP) and it is even in conflict with FP.

That may not be the intent, but that seems very condescending to me to the point where I no longer wish to engage, so I won’t be checking back here or bothering to elaborate.

I offered my sincere apologies privately. I should also offer my apologies to the Haskell community that I love so much for staining its image with what is evidently unacceptable behaviour. @moderators please consider revoking my privilege to speak here.

I should also offer my apologies to the Haskell community that I love so much for staining its image with what is evidently unacceptable behaviour

Thank you @kindaro. Our community contains people from all sorts of cultures and backgrounds, many of us with English as a second or third language. So we will all, from time to time, mis-communicate by accident. I certainly do, and English is my native language! But accidents are not fatal. We can be gracious to each other. We can apologise, learn, forgive, and (always!) strive to do better.

Our guidelines for respectful communication are here.

6 Likes

The way I understand it (I’ve done my fair bit of OOP two decades ago) is that you can do object.x (to access a value) in the same way that you can do object.f() (to access some code). In that sense, object has some code; or, if you prefer, every method is a value holding a function pointer, and the final dispatching is decided by the object instance itself. How it happens in practice is only an implementation detail.

I argued that object.f() isn’t conceptually different from f(object). @kindaro indeed convinced me, however, that the fact that f is defined within the class scope (C++) or within the module (Smalltalk) does matter.

I finally watched the talk to the end.

It seems the C++ talk that I linked in the original post claims to have an answer to your open question: In modern C++, subtype polymorphism is redundant thanks to its parametric replacement. That’s the “bold statement” of the speaker.

And here is what I learned: you nail down the difference between subtype polymorphism and parametric polymorphism to the fact that, in the absence of subtypes, type constraints are type equalities. And thus type inference is ubiquitous in Haskell and sporadic in Java.
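A small illustration of that point, as I understand it (a sketch, not taken from the talk): constraint solving by unification deals in type equalities, so the signature below need not be written at all.

```haskell
-- GHC infers: double :: Num a => a -> a
double x = x + x
-- Under subtyping, each call site instead yields an inequality
-- (argument type <: parameter type), which resists global
-- unification - hence inference stays local and sporadic in Java.
```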

That’s only part of the story though.

If OOP were just a matter of having objects as bundles of data and code, then it wouldn’t be super controversial - we use the “record of functions” (or “record of procedures”) pattern all the time in Haskell, C, etc., whenever we need some degree of runtime polymorphism that goes beyond a single closure or callback.
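For illustration, a toy form of that record-of-functions pattern (Logger and its fields are made-up names):

```haskell
data Logger = Logger
  { logInfo  :: String -> IO ()
  , logError :: String -> IO ()
  }

consoleLogger :: Logger
consoleLogger = Logger
  { logInfo  = putStrLn . ("[info] "  ++)
  , logError = putStrLn . ("[error] " ++)
  }

-- Code taking a Logger is polymorphic over its behaviour at runtime,
-- with no classes or inheritance involved.
greet :: Logger -> IO ()
greet l = logInfo l "hello"
```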

What sets full-blown OOP apart from the record-of-functions pattern is that those functions, a.k.a. “methods”, can call back into other methods of the objects through which they were invoked, and when they do, the method will be resolved (“dispatched”) based on the runtime object (“dynamically”), not the statically declared or inferred type. E.g., if we define an object parent with methods getName and sayHello, such that sayHello calls into getName, and we then define another object child that inherits sayHello from parent but overrides getName to return a different name, then a call to child.sayHello resolves to parent.sayHello; but when parent.sayHello then calls into getName, that call is resolved through the runtime method dictionary and ends up at the overridden child.getName, not the statically expected parent.getName. (Note that I carefully avoided the word “class” here: the above holds for class-based flavors of OOP, like Java, just as it does for ad-hoc or prototype-based flavors, like JavaScript - the actual inheritance mechanism doesn’t matter, as long as method calls are resolved dynamically, which they are in both cases.)
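Here is a sketch of exactly that parent/child scenario in Haskell, encoded with open recursion (all names are illustrative): each object is generated from a function of self, and new ties the knot so that method calls go through the final, runtime object.

```haskell
data Obj = Obj { getName :: String, sayHello :: String }

parentGen :: Obj -> Obj
parentGen self = Obj
  { getName  = "Parent"
  , sayHello = "Hello, " ++ getName self  -- late-bound call via self
  }

-- "Inherit" sayHello, override getName only.
childGen :: Obj -> Obj
childGen self = (parentGen self) { getName = "Child" }

-- Tie the knot: self refers to the finished object.
new :: (Obj -> Obj) -> Obj
new gen = let self = gen self in self

-- sayHello (new parentGen) == "Hello, Parent"
-- sayHello (new childGen)  == "Hello, Child"
-- The inherited sayHello resolves getName on the runtime object.
```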

This is a powerful tool, and it comes with some severe downsides, especially if you care about purity and statically enforced types. But it is also a tool that is rarely needed; most of the time, statically decidable forms of polymorphism are powerful enough, and unlike dynamic dispatch, we can reason about them statically.

So that’s really what the “OOP vs. functional” debate is about - what’s more important, dynamic dispatch or statically controlling effects?

As an aside, implementing full-blown OOP with dynamic dispatch and all is perfectly possible in Haskell, and it even results in a somewhat ergonomic API - but interestingly, such a Haskell OOP library will have largely the same limitations as existing OOP languages.

3 Likes

Yes, as far as I can tell that is the essence of OOP, in the sense that it’s the aspect not shared with other paradigms: methods on superclasses can end up calling methods of subclasses, even though the subclass probably doesn’t even exist yet at the definition site of the superclass.

1 Like

I think the term that describes that behaviour is “open self-recursion”.

1 Like

I’ve always been fond of Alan Kay’s definition:

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things.

(source)

Of course, this excludes languages like C++ and Java… and for that matter, one could say that it includes Haskell, given what you can do with effect systems.

Yes, that’s a very good definition of what Kay had in mind, but that’s (unfortunately) not exactly what people mean today when they say “OOP”.

I think calling Haskell an “OOP Language” in the Kay sense, though, is a bit of a stretch - we can do Kay-style OOP in Haskell, but the same goes for a number of other languages, if you write enough library code for them. Maybe the only mainstream-ish language that has something resembling Kay’s ideal embedded into its core is Erlang, with its actor model and all that.

1 Like

Yes, or just “open recursion” or “late binding”.

1 Like

Indeed - but I find “dynamic dispatch” more descriptive.

1 Like

This is a huge stretch.

Haskell’s effect systems are irrelevant here. Everything you can do with effects, you can do without them, and in other languages. And I don’t see how message sending is even related to effect systems.

3 Likes

send is the idiom for launching a GADT constructor (a “message”) into the effect system and getting a response back. It feels like messages in that spirit to me - just wired up differently.
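A toy version of that idiom, to show what I mean (this is not any particular library’s API): a GADT of “messages”, where each constructor’s result type says what response the sender gets back, together with a handler that answers them.

```haskell
{-# LANGUAGE GADTs #-}

data Console a where
  ReadLine  :: Console String
  WriteLine :: String -> Console ()

-- In a real effect system, `send` would inject the message into an
-- effect monad and a separate handler would answer it; here the
-- handler answers in IO directly.
send :: Console a -> IO a
send ReadLine      = getLine
send (WriteLine s) = putStrLn s
```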

1 Like

This is also a big stretch. This send has a very weak relation to Kay’s messaging. The latter is much closer to Erlang and actor models.

1 Like