As a professional dev who has made a career out of working in oop languages and codebases, it took me far too long to realize that when it comes to oop, the emperor has no clothes.
To this day, oop advocates can't even agree on what oop is or means.
Apparently oop as envisioned by Alan Kay was supposed to work like cells in the body that pass messages between each other and take actions independently.
Why? Who knows! It was never really explained why literally one of the most complex systems imaginable, one whose workings we still barely understand, should be the model for what could and probably should be a lot simpler.
Today's oop languages are probably very far from what Kay envisioned (whatever that was), but it remains unclear why classes and objects are "better" than the alternatives.
And before anyone goes and comments aksully code organization blabla like yes but code organization can be great or shit in oop or fp or procedural codebases, it has nothing to do with the "paradigm".
Let alone that the entrenched, canonical, idiomatic coding styles of most modern oop languages encourage state, mutability, nulls, exceptions and god knows how many trivially preventable entire classes of errors. Granted, most have now started to come around and are adopting more fp features and ideas every year, but still.
Don't get me wrong, writing programs like cells in the body that pass messages between each other and take actions independently is an interesting idea which deserves pursuing, if nothing else than to satisfy our curiosity and see what, if anything, it's applicable and suited to. (And even if the answer turns out to be "nothing", we've still learned something!)
But going from there to making strong claims about it being a more or less universally superior paradigm for computing and writing code, with little to zero evidence, that's a huge, huge stretch.
To the degree Erlang and Actors work, I think that's kind of a happy coincidence, and not due to any rigorous work on Alan Kay's part.
>Apparently oop as envisioned by Alan Kay was supposed to work like cells in the body that pass messages between each other and take actions independently.
IIRC he was interested in an abstraction that worked "all the way up" - he wanted the abstraction to essentially be a tiny computer. It's a cool idea with a lot of power. But it really is too powerful for applications, and leads to the same complexity problems you get in large distributed systems. And to not have 'function' as a primitive is inexcusable in any language.
Java's lack of a function primitive is a key weakness. It means you must have at least one "utility class" per project - a public class with static methods that are themselves simple functions. (The relatively recent introduction of lambdas, and before them anonymous inner classes, does not really address the problem. That's just ugly syntactic sugar.)
The problem with Java's everything-must-be-a-class OO fundamentalism is that a computer actually has data AND executable code. Java forces everything into a noun (class/data), but some things are actually verbs. You can spot these because they have awkward names, like Runnable.
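A contrived sketch of both patterns named above - the obligatory utility class, and a verb ("run") wrapped in a noun (`Runnable`). All class and method names here are invented for illustration:

```java
// The obligatory "utility class": a noun that exists only to hold verbs.
final class StringUtil {
    private StringUtil() {} // boilerplate the pattern forces: prevent instantiation

    static String shout(String s) {
        return s.toUpperCase() + "!";
    }
}

public class NounKingdom {
    public static void main(String[] args) {
        // Pre-lambda style: wrapping the verb "run" in an anonymous noun.
        Runnable greeter = new Runnable() {
            @Override
            public void run() {
                System.out.println(StringUtil.shout("hello"));
            }
        };
        greeter.run();

        // Lambda syntax hides the noun, but the functional interface is still there.
        Runnable greeter2 = () -> System.out.println(StringUtil.shout("hello"));
        greeter2.run();
    }
}
```

Either way, the function cannot exist on its own; it must be attached to some class, whether named, anonymous, or implied by the lambda's target type.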
Oh sure. Java should have had function pointers sooner. The functional interfaces are just ok. C#'s actions and delegates are a bit better, imo.
But the parent went so far as to say that scoping functions to a class name is bad and I just don't see it. I think a pure global scope would make Java worse.
You've never seen Java (or C#) code where a bunch of unrelated functions belong to some meaningless class like "Utility"?
I do think both languages would benefit from allowing/encouraging namespace-level functions (C# comes close with static classes, but that's pretty clearly an abuse of any logical definition of the word "class"). However, I probably wouldn't use them for implementing "controllers" in web applications. While I tend to agree it's not a good use of OOP, having key information about the current request/response available via properties of the current controller (and hence kept separate from function-level parameters that represent semantics specific to the endpoint being handled), plus various built-in functionality at the framework-supplied "controller" level, isn't something I'd see a benefit in giving up.
But I'll admit there are few cases where a single controller class with methods for various CRUD-like operations on a particular resource type makes any sense from an OOP perspective. For C#, I did think what might actually work is being able to define extension methods on the framework Controller class, with an automatic routing mechanism based on convention (e.g. if you defined an extension function "Get_Widget" then GET requests to /widgets would be automatically routed to it; or it could be based on the namespace). But in general it would likely make the code more verbose (lack of implicit "this") for questionable gain.
> You've never seen Java (or C#) code where a bunch of unrelated functions belong to some meaningless class like "Utility"?
Is that worse than the global scope as the dumping ground?
One other benefit of forcing classes is that there's a logical place for private functions and fields. Private state is an anti-pattern to some but it's more consistent design for Java, I think.
> Is that worse than the global scope as the dumping ground?
I wouldn't have a problem with requiring everything to at least be in a namespace, though it needn't be enforced at the language level (vs a linter rule).
I don't have any problem with it not being OOP, but I do have a problem with the abuse of the concept of a "class". I also think it's rarely the ideal way to organise your code (even if the .NET core framework has examples of it - WebUtility/HttpUtility etc., and the JRE may well do too, though I can't see any obvious examples in the java.util namespace).
The entire java.util namespace is the example here.
Using static methods on a class seems perfectly fine to me; that's what they're there for. If you have a handy utility class to work around problems in the standard library, then that's an issue with the standard library; Java has this but has gotten better over the years.
Might be heresy to come in to an OOP-bashing thread and say this, but the noun thing fits my mental model perfectly. IMHO, verbs aren't 'done' by nothing and data rarely exists without belonging to something, so it feels fine to me. I always find these discussions interesting though because clearly a lot of people feel differently (and very strongly in some cases).
(Please don't take this comment as an endorsement of OOP as practiced in the corporate world or that traumatic memory you have of an OOP codebase.)
Inform7's Actions and Rules system made me reconsider the centrality of the notion of Object when it comes to dispatching (note: Inform7 reads like English). Inform7 takes a verb-centric stance: it considers the verb first, then the subject (i.e. "this" in OOP), then the object (i.e. the arguments to a method). You can inject more specific behaviors into an action (something akin to a multimethod in CLOS) by adding an adjective to a verb's complement/arguments. You can even use adjectives on the verb/sentence. All of this is gathered within a rulebook associated with the verb that exists aside from the actual source code: you can write code linearly, following the natural order of a given user story from the domain you model, using multiple verbs in succession, but you can still display all of a verb's rules, encompassing many user stories. Most importantly, it is truly extensible, because you can't add code to an action's rulebook without doing so through a rule, which plays the same role as a join point in aspect-oriented programming or even subject-oriented programming.
My obtuse point is that though sorting your source code files into various directories offers no functional benefit*, some folk find it useful from an organisational perspective. Maybe there's an element of that in housing functions in an object.
*bear with me - admittedly there are languages/technologies where directory structures are required, but hopefully you get my point.
You didn't really address the question. Because a static method on a static class is a verb, even in the kingdom of nouns. It's just namespaced. So what's the issue? From a practical point of view.
Personally, my issue with it (besides the pointless noise of having to write a private constructor and such) is that you can't alias them like you can with actual namespaces/modules in most languages, which becomes a pain when e.g. you need functions from both Apache's StringUtils and your own StringUtils in the same file.
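To make the aliasing pain concrete: Java has no `import x as y`, so when two classes share a simple name, at least one must be fully qualified at every call site. A self-contained sketch, using invented nested classes as stand-ins for the two colliding libraries:

```java
// Hypothetical stand-in for a third-party library's StringUtils.
class ApacheLike {
    static final class StringUtils {
        static boolean isBlank(String s) { return s == null || s.trim().isEmpty(); }
    }
}

// Hypothetical stand-in for your own project's StringUtils.
class Ours {
    static final class StringUtils {
        static String slug(String s) { return s.trim().toLowerCase().replace(' ', '-'); }
    }
}

public class NoAliases {
    public static void main(String[] args) {
        // With no import aliasing, the qualification has to be repeated
        // at every single call site where the simple names collide.
        boolean blank = ApacheLike.StringUtils.isBlank("  ");
        String slug = Ours.StringUtils.slug("Hello World");
        System.out.println(blank + " " + slug);
    }
}
```

In OCaml or Rust you'd write a one-line module alias or `use ... as ...` and be done with it.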
Generally it goes with unnecessary boilerplate that gets copied around without any thought to whether it's necessary.
People who have been harmed by too much oop won't just define a function and call it. They'll define a class solely to contain that function and then call it more verbosely.
On the face of it it's not that bad but once you start tolerating a habit for doing things because that's how they're done and not because you actually think it's the right way you end up picking up all of these fragments from places where they do belong and forcing them into places where they don't.
Or at least, that's what I struggled with when I was new and insecure and wanted people to look at my code and think that I knew what I was doing because I showed evidence of knowing the dogmatic patterns, and I've seen it in others too.
Classes are basically namespaces, and trivial code doesn't need them, but quickly you do.
A lot of the criticisms being leveled here at oop actually make me question the critics' programming experience. There are many valid criticisms, but not many of them are being levelled here.
If all you're doing is using classes as superfluous namespacing, well, that's hardly a threat. It's when they start having superfluous constructors that you're on a dark path. I once knew a guy who had a when-in-doubt-make-it-a-singleton habit; it was maddening to write tests anywhere near him.
Obviously that's an amateur move (and being an amateur myself I didn't have the confidence to show him a better way). But then it's never the skilled practitioners that make something look bad anyway.
> Why? Who knows! It was never really explained why literally one of the most complex systems imaginable, one that we still really have very little idea how it even works, should be the model for what could and should probably be a lot simpler.
From my understanding, it is that every object manages a very limited set of functionality, and a goal is for each object to know as little as possible about the outside world. Along with that, you bind not to specific objects/classes but to specific messages (including multiple dispatch).
If you think that OOP advocates can't decide on what OOP is, don't look too hard at the FP side of the world. :D
Same goes for evidence-based ideas. Odds are stupidly high that you cannot find a single large-scale codebase that has succeeded using any "pure" technique. Heck, I'd take a small-scale codebase as a fun thing to look at, at this point. Make performance a requirement, and really get ready for tears.
Jonathan Rees had said: "Here is an a la carte menu of features or properties that are related to these terms; I have heard OO defined to be many different subsets of this list." The aspects he names: "encapsulation", "protection", "ad hoc polymorphism", "parametric polymorphism", "everything is an object", "all you can do is send a message", "specification inheritance = subtyping", "implementation inheritance/reuse", "sum-of-product-of-function pattern". http://www.paulgraham.com/reesoo.html
> Apparently oop as envisioned by Alan Kay was supposed to work like cells in the body that pass messages between each other and take actions independently.
>Let alone that the entrenched, canonical, idiomatic coding styles of most modern oop languages encourage state, mutability, nulls, exceptions and god knows how many trivially preventable entire classes of errors. Granted, most have now started to come around and are adopting more fp features and ideas every year, but still.
State isn't a feature solely of OOP. It's very easy to see the FP craze as a similarly dogmatic insistence on pure everything as removing useful features instead of preventing errors.
If FP truly was the indisputable way of the future, Scala would be the CRUDspeak of choice by now.
> Apparently oop as envisioned by Alan Kay was supposed to work like cells in the body that pass messages between each other and take actions independently.
When I think of programs I think of modules of code that act like machines on a conveyor belt or assembly line operating on and pushing data along from station to station. This is why Erlang, Go and Plan 9's thread(2) library are how I want to program.
> To the degree Erlang and Actors work, I think that's kind of a happy coincidence, and not due to any rigorous work on Alan Kay's part.
What you are describing is what a directed acyclic graph achieves. It works well when you need to do a series of transformations. It does not work well when you need branching, loops and state at a high level. The mistake people make is that they see it work very well for certain parts of their software, then they give in to silver bullet syndrome and try to make it work for everything.
Most oop was designed in the context of desktop gui apps, which are much more complex than web apps imo.
Is-a vs has-a is a lot clearer in the context of widgets and windows. It still isn't perfect; most popular gui toolkits used inheritance where composition would probably have been better.
Inheritance ended up as a core component of oop principles when ultimately it is a flawed approach compared to more flexible composition methods.
Liskov substitution was a massive success, classes were an improvement over structs, private vs public was generally a good idea even if the astro architects went nuts with it.
> oop languages encourage state, mutability, nulls, exceptions and god knows how many trivially preventable entire classes of errors. Granted, most have now started to come around and are adopting more fp features and ideas every year
Do functional languages actually discourage "state, mutability, nulls, exceptions"?
I know why we might argue against these features, but implying that functional programming doesn't have them (or equivalents: e.g. call/cc can make execution flow even less predictable than exceptions) seems a little weird!
My limited experience with FP languages suggests they encourage very explicit state management. You remove a lot of the footguns of having side-effects all over the place, at the cost of it being more complex (sometimes, depending on the tooling and your familiarity with it) to introduce state or external side-effects when you want them.
Once I got the hang of Elixir/Erlang it was just such a weight off my shoulders, honestly. Programming without entire classes of bugs is so much nicer.
And yet despite all these criticisms and complaints, C++, Java, Python, Javascript, PHP and even Ruby to some extent, have been massively popular and used to write all sorts of software we rely on.
What people are complaining about when they voice these sorts of concerns goes back to the concept of essential vs accidental complexity proposed by Fred Brooks. The issue is not that adding accidental, inessential complexity to software is somehow a sin. The problem is that someone will have to wrangle this extra complexity. Enterprises can just throw bodies at the problem, creating drudgery. The warning is against producing pointless, boring work.
Which is what’s great about these languages - best of both worlds.
I also think this discussion shouldn’t be about languages at all, you can have objects of some form in any imperative language including C. It’s more about what design patterns make sense for specific problems, and sometimes the answer is OO-style (eg with class hierarchies) and sometimes not.
I do some programming, but not OOP. Over the years I've read about it and asked colleagues about it, and the why question finally settled on it helps you organize your code. I can get behind that. But I also want to understand why OOP instead of conventional programming; I'm still waiting for that part to click. Until that question is answered it won't be capturing my full attention.
I think the root rant here, which I personally find unconvincing, is thinking in terms of OOP vs functional programming. Procedural isn't the opposite of OO; objects are composed of many procedures.
I wasn't intending to trigger anyone but it seems I have. Perhaps I should have stated it as, "why OOP instead of organizing the code through other means?" It's a genuine question I have, and it just hasn't clicked for me yet, personally. So it's not really trying to be convincing at all; I'm just sharing my experience.
The most compelling scenario for OOP is situations where, in its absence, a developer would tend towards reimplementing some OOP concepts over time. I think it is a natural fit for game development and some other things, but I don't think it melds so well with that thing we call 'business logic'.
OOP works well when state and behavior are tightly coupled, particularly for things which are naturally modeled as state machines. Device drivers tend to be perfectly suited to OOP organization: the underlying device has state that needs to be tracked, it has behaviors which can be controlled (usually by sending particular messages), and there are often multiple implementations of the hardware which should all expose the same high-level API. OOP allows bundling the state handling with the control messages needed, exposing a single common API while allowing specialized implementations for particular driver implementations.
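The driver shape described above can be sketched in a few lines. This is not any real driver API - the names and the "hardware" are invented, and a loopback stand-in replaces the actual device - but it shows the bundling of state with control messages behind one common interface:

```java
// One common high-level API that every device implementation exposes.
interface Driver {
    void open();
    void write(byte[] data);
    boolean isOpen();
}

// A per-device implementation bundles its own state with its behavior.
class LoopbackDriver implements Driver {
    private boolean open = false;     // device state tracked alongside behavior
    private int bytesWritten = 0;

    public void open() { open = true; }

    public void write(byte[] data) {
        if (!open) throw new IllegalStateException("device not open");
        bytesWritten += data.length;  // stand-in for "send message to hardware"
    }

    public boolean isOpen() { return open; }

    int bytesWritten() { return bytesWritten; }
}
```

Callers program against `Driver` only; a UART-backed or USB-backed implementation with entirely different internals can be dropped in without touching them.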
For the higher-level code, it's often less useful.
I've never really seen this explicitly stated anywhere, so maybe this is some super idiosyncratic idea on my part, but it's always seemed to me that one of the key pieces of OO is the use of polymorphism as a sort of syntactic/conceptual sugar around function indirection, one that many people seem to have an easier time understanding and reasoning about.
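One way to see the "sugar around function indirection" idea is that a virtual call and a passed-in function value are two spellings of the same indirect call; the polymorphic spelling just attaches the function to the data. A hedged sketch with invented types:

```java
import java.util.function.IntUnaryOperator;

// Polymorphic spelling: the call site doesn't know the concrete type,
// and dispatch goes through the object's method table.
interface Shape {
    double area();
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class Indirection {
    // Explicit spelling: the "method" is just a function value we pass around.
    static int apply(IntUnaryOperator f, int x) { return f.applyAsInt(x); }

    public static void main(String[] args) {
        Shape s = new Square(3);            // any Shape will do here
        System.out.println(s.area());       // dispatched at runtime

        IntUnaryOperator dbl = n -> n * 2;  // same indirection, no object in sight
        System.out.println(apply(dbl, 21));
    }
}
```

Both call sites are "call whatever function happens to be behind this reference"; the class version merely gives that indirection a noun-shaped syntax.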
> And before anyone goes and comments aksully code organization blabla
I'll bite. Having worked on "enterprise" applications, OOP (or more like the dreaded enterprise patterns) has sort of "clicked" for me.
An aside: git-flow is more than a decade old at this point and people still have heated debates from time to time on whether that approach is any good. In essence, git-flow merely introduces organization to patchsets. If you are building a SaaS-like system, most probably every environment you have runs "current" code and each stage (prod, test, dev) is merely a shifted pointer that will eventually catch up. Why would you structure your teamwork around identifiable patchsets, and why would you keep that information around, when you do trunk-based development? There is seemingly no need to do that, and you would rightfully diss git-flow as overly complex.
However, if you build hardware appliances or installable offline applications with multiple versions (think e.g. MS Office in 2000-2010: multiple versions and tiers of those out in the wild under support), suddenly there are situations where you do in fact need to check out and work on the exact patchset that is installed at a customer site. If your field requires any form of certification, you will quickly learn that having patchsets provably not touching frozen (already certified or submitted for certification) code is highly valuable. The ability to cherry-pick a hotfix can save an organization man-months.
Back to code organization. So you have this "enterprise" application that has various software modules/components, some of which are built by external contractors, some are off-the-shelf components, some have been left and forgotten five years ago, and for one reason or another you want to work on one of these components. Whatever team you assemble, the majority will have zero understanding of the assumptions baked into the code. Documentation, however meticulously maintained, will still leave holes in understanding. How do you make changes to a decade-old component that is used in various weird ways all throughout the project with any confidence that you are not breaking stuff at a distance? Apparently, OOP enterprise patterns that introduce decoupling points for implementation details and pass around objects with assumptions abstracted away are pretty solid barriers. They are cumbersome and difficult to use at the early stages of development, when all the complexity of the task can fit into the heads of the programmers implementing features. Similarly to git-flow, the advantages of these structures only become apparent when you start needing them, and that will quite probably be when the principal engineers who greenfielded the project have long left the company altogether.
A very specific example: the factory pattern. Why would you have a layer of indirection simply to instantiate an object? All refcounting shenanigans aside, you just cannot change the interface of an object/component without affecting call sites - call sites that may be outside of your control, code-frozen, or simply too dreadful to touch without strong justification. The factory pattern introduces a decoupling point between call sites and implementation. With a factory you can implement interface shims and leave call sites untouched, while having the freedom to rework the component and its interface. You just cannot achieve that without OOP.
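A minimal sketch of the shim this buys you, with invented names. Call sites depend only on the factory and the old interface; the reworked implementation's new constructor and method signature are absorbed inside the factory:

```java
// The old interface that frozen call sites were compiled against.
interface Parser {
    int parse(String s);
}

// Call sites only ever say Parsers.create(), never `new` on a concrete class.
final class Parsers {
    static Parser create() {
        // The reworked implementation gained a constructor argument and a
        // renamed method; this shim absorbs both, so call sites stay untouched.
        NewFancyParser impl = new NewFancyParser(10); // radix is a new required arg
        return s -> impl.parseWithRadix(s);
    }
}

// The reworked component, free to change shape behind the factory.
class NewFancyParser {
    private final int radix;
    NewFancyParser(int radix) { this.radix = radix; }
    int parseWithRadix(String s) { return Integer.parseInt(s, radix); }
}
```

If the call sites had said `new OldParser(...)` directly, every one of them would need editing (and re-certifying) the day the constructor changed.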
> Let alone that the entrenched, canonical, idiomatic coding styles of most modern oop languages encourage state, mutability, nulls, exceptions and god knows how many trivially preventable entire classes of errors.
I will agree that OOP implementations in mainstream languages leave much to be desired. That is part of the price we pay for backwards compatibility.
I remember learning and using OOP in computer science classes and then going to apply it to a game I was working on. I object-ified everything! It felt like it all just made sense.
Then I ran the game. And it crawled at 15fps and I wondered why.
Of course I was new to programming so I can’t just blame OOP but it was my first lesson in how OOP isn’t a magic wand by any means. And later I got into more functional programming and today I find myself writing function-based components in react for a living and loving it. It’s funny how things go like that.
This ten-year-old essay is still strong. It makes some points that are worth repeating.
> I have a hypothesis: this pattern is so common for the simple reason that Java doesn’t have first-class functions.
She’s picking on Java deservedly because Gosling made what in retrospect was a mistake to go so all-in on classes, and because it became such an important pedagogical and deployment language (and nobody will shoot at you if you’re unsuccessful).
Say what you will about C++ but it took a different path, not only changing its name from “C with Classes” but making classes a tool for manipulating the type system (and not the only one) and supporting the true benefit of a class system, generics.
Edit: I changed an incorrect "He" pronoun to "She". Thanks to quickthrower2 for pointing out this embarrassing blunder.
The many "you're wrong if you believe that" made me remember the anecdote of Alan Kay attending some conference about OOP and saying that it was wrong, that he invented OOP and it was not like that.
IIRC, Kay's vision is that OOP is about messages.
I have my own pet theory, of course it must be wrong too, but if I may, only as food for thought: The core usefulness of OOP is usability. A language is a matter of connecting thoughts through a mental model. Subject, verb, predicate. Object, method, parameters. That's mostly it. The rest are implementation details and lots of bikeshedding.
This is the reason I find it useful. To me, OOP is as much about your organization as it is about the best way to load, transform, present, edit, and store data. I think the culture of some companies lends itself to various kinds of programming, but it's the cultureless companies where OOP is most useful. The places where nobody is trying to change the world, where people work to pay their mortgages, where an executive may only work for two years and a programmer may only work for six months.
It's in an environment like that where a self-documenting, self-configuring code base with custom classes and exceptions that guide the next developer is essential.
Every developer should have two users in mind. The person using the software, and the next developer who maintains the software after you're gone. OOP is a great way to empower the second user when the only thing that will reliably outlive the developer is the code base.
I think that OOP is the best way to accomplish the goals you list only if you stick to languages with mainstream appeal - but think it's sad that's the state of the world. Objects (as they are used in e.g. Java) default to stateful and complect data with the methods on that data. I find that most of the time what I want is closer to OCaml's Modules[0] which give me many of the tools of code organization without the complexity with state. (note that OCaml allows objects, so you get a real sense for how often you want an object over a module, 95% of the time I wanted a module).
This is one thing I like about Scala. While its classes can be used in the exact same manner as Java, that's not how its creators pitch them. I've seen them advocate using classes/objects akin to ML's modules, but there's enough flexibility to pivot if for some reason that does not make sense.
I want to second the idea that the primary benefit of OOP is logical organization.
For smaller projects I don't care one way or another about OOP practices, but once you start getting into hundreds of thousands of lines of code IMO it becomes an absolute necessity.
But we don't need classes, vector tables, or other runtime features to achieve organisation. We just need our compiler to recognise namespaces. "module Foo where"
Objects open up some advantages in how you snap together code that namespaces alone do not.
Code is much easier to deal with when you define abstractions around small sets of functionality and then allow the caller to pass in various objects that provide those functions to the code that needs it.
You can have an application that accepts a 'data_backend', and then provide a data_backend that just stores information into a dict for testing and getting the app initially written, one that tracks all of the changes made or exposes them for tests to check, or another that stores it to sqlite for running a local instance, and another that stores it to some real database for a larger one.
The calling code doesn't need to know what the data_backend does or how it works, it just tells it "store this", "read that" and the data_backend does whatever it does and data goes in and out of it.
You build up all your code that way and you'll be able to easily stub chunks and replace them with functional implementations, and then swap those out when needs change or by options given at runtime.
It's a lot easier to read and write than code that's littered with a million if statements trying to keep track of too much complexity in too many ways all at once.
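The data_backend idea sketched above, in a self-contained form (all names invented; the in-memory map stands in for the "dict for testing" backend):

```java
import java.util.HashMap;
import java.util.Map;

// The abstraction the application codes against.
interface DataBackend {
    void store(String key, String value);
    String read(String key);
}

// In-memory backend: enough to write and test the app before any database exists.
class DictBackend implements DataBackend {
    private final Map<String, String> data = new HashMap<>();
    public void store(String key, String value) { data.put(key, value); }
    public String read(String key) { return data.get(key); }
}

// The application never knows what the backend does with the data.
class App {
    private final DataBackend backend;
    App(DataBackend backend) { this.backend = backend; }

    void rename(String userId, String newName) {
        backend.store("user:" + userId + ":name", newName);
    }

    String nameOf(String userId) {
        return backend.read("user:" + userId + ":name");
    }
}
```

Tests construct `new App(new DictBackend())`; a sqlite-backed or real-database implementation slots in later without a single change to `App`.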
OOP is just syntax sugar and compiler constraint enforcement for the same kinds of things you see the linux VFS do. There are many filesystems for linux, but each just provides a handful of functions in a structure that should be called against the structures representing the various types of filesystems. In Linux's C you have to slap it all through a (void *), but in languages with objects, you can use those as the medium of abstraction instead of doing it manually.
Some make you do a bunch of inheritance garbage with stapling objects together, others will let you build to interface definitions, or just check the structure of the object to see if it matches the needs of the caller, or be a dynamic language that just checks for the members at runtime.
If I create a REST API no one complains they don't have access to the inner-workings, local variables, etc. But if I give a similar experience and call it a "class" suddenly it's ugly and mean.
I find that's often because classes come with a lot of stuff that is less desirable - mainly inheritance and its assorted complexities.
The other side is that classes aren't the only way to get this sort of encapsulation. The classic example is closures - data inside the closure acts as the encapsulated data, and the returned type of the closure is its public API. ML languages typically use modules in place of classes - the module signature defines the public API, and rather than calling methods, you instead call functions with arguments (not `list.length()` but `length(list)`). But again, that's just syntax - we're keeping the same encapsulation because the module-defined functions are the only ones capable of fiddling with the value's internals. You also see this in Rust, which does use method syntax, but has traits and types that act more like modules than typical classes.
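The closure version of encapsulation can even be spelled in Java itself, as far as its syntax allows: a factory method returns a small interface, and the captured local is reachable only through it, exactly like a private field but with no class declared for it. A sketch with invented names:

```java
import java.util.function.IntSupplier;

public class Counters {
    // The returned IntSupplier is the entire public API; `count` is hidden
    // inside the closure rather than inside a class's private field.
    static IntSupplier counter() {
        int[] count = {0}; // boxed in an array because captures must be effectively final
        return () -> ++count[0];
    }

    public static void main(String[] args) {
        IntSupplier next = Counters.counter();
        System.out.println(next.getAsInt()); // 1
        System.out.println(next.getAsInt()); // 2
    }
}
```

Each call to `counter()` produces an independent counter, which is the same guarantee a class instance with a private field would give - the encapsulation survives, only the noun is gone.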
All in all, I don't think anyone's complaining about encapsulation, but rather it's a question of whether typical OOP (with all that that typically includes) is the best form of it.
Everything is wrong or ill-fitting. I mean, we are making tiny blocks of silicon do extreme amounts of human work, while being entertained with music and videos. OOP is like five abstractions deeper than what it intends to model. It's a tool; I wish people stopped trying to turn the extreme case of engineering that is software development into some sort of axiomatic physical law. Our expectations would be way better calibrated, especially when our bosses require we implement a nosql db that stores rss feeds and uses machine learning to sort them in order of "coolness".
There are very few actual rules, despite all the opinions derived from arbitrary paradigms and from what should be done to work cohesively as a team. I wish there were a smidge fewer ivory towers and a smidge more common language.
> The core usefulness of OOP is usability. A language is a matter of connecting thoughts through a mental model.
Incidentally, this is exactly how I came to "reinvent" OOP as a newb programmer. I was working in a codebase where I didn't know what tools I had at my disposal, but we had a few modules where I could just type `module.` and see a list of all the methods, right there in front of me (in VSCode's intellisense). I asked, "Why don't we do this for our helper functions?" and we ended up with a handful of major objects to import with easily discoverable methods.
Now of course I could have crawled the codebase to get the same information, but for someone new to programming and/or brand new to the codebase, that isn't necessarily a good use of time. It can be a lot of slow unraveling of "what goes where", whereas organizing things in "OOP" (I still don't know if I'm using this term right, because it was just part of how I learned, without being named) teaches that information more quickly through use and experimentation. Basically what I was looking for was "namespacing", I guess, but I would also say it helped organize our code in a more useful way.
Runtime, multi-method dispatch is probably the singular thing that stands out as "missing" from other systems. I don't miss it a lot, but there have certainly been times when I have.
I also enjoyed the ability to not be locked into the rigor of a class structure when it comes to methods. Since CLOS is function based, it's trivial to add functions to a class that you don't even have the source code to.
There have been many times working with a 3rd party or system library where I've had that "if only I had this little method" moment that would make my life easier. I'd rather have that capability and fight, say, namespace issues for the "Well what if everyone added their own 'upshiftFirstLetter' method to the String class" problems on an ad hoc basis.
Part of this, of course, stems from the locked down nature of the scope of classes. Not having access to internal structures and state. I'd rather take those risks of leveraging internal state knowledge not supported by the original designer vs the alternatives of sometimes having to throw out the baby with the bathwater.
I haven't used it, so not sure if it fits the pattern, but the question for me is: is it useful, time-saving, more usable? does it make the task more clear for the programmer that uses it?
It depends. It is a fundamentally different branch of the OOP family tree than what most people are used to seeing. Enough so that I've seen people declare it to *not be OOP*. So if you stumble across the model having only seen the style of OOP popularized by C++ and later Java, no. You will probably *not* find it to be liberating.
The idea is that you have classes which model state. And then you have generics that model functionality. And you define methods which provide an implementation of the generic for a class. But it's more flexible than what you see in Java because such a method can be easily added after the fact as it's not intrinsically part of the class' definition.
If you're familiar at all with Typeclasses in Haskell & Scala, personally I find those to be similar enough to get the gist. Likewise Dylan and R's S4 objects are modeled after CLOS' structure.
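Python has a (single-dispatch-only) cousin of this in the stdlib: with functools.singledispatch the generic is declared once, and implementations for specific classes can be registered afterwards, even for classes you don't own. A rough sketch (function names made up):

```python
# A rough Python analogue of generic functions: the generic is declared
# once, and implementations for specific classes can be registered
# afterwards -- even for classes you don't own. (singledispatch only
# dispatches on the first argument, unlike CLOS's multi-methods.)
from functools import singledispatch


@singledispatch
def describe(obj):
    # Fallback implementation of the generic.
    return f"some object: {obj!r}"


@describe.register
def _(obj: int):
    return f"an integer: {obj}"


# Adding an implementation for a built-in class we don't control:
@describe.register
def _(obj: list):
    return f"a list of {len(obj)} items"


print(describe(3))       # an integer: 3
print(describe([1, 2]))  # a list of 2 items
```

It only gets you single dispatch, so it is a pale shadow of CLOS's runtime multi-method dispatch, but the "add a method after the fact" flavor is the same.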
Combine a type system with dispatch logic, using abstractions... it is very clear for some engineering applications and/or tinkertoys. Many people can get the basic ideas with thirty minutes of introduction.
But Smalltalk also has inheritance and metaclasses, so it wasn't just objects passing messages. It was also designed as a visual live programming environment which was basically the entire computer system. Kay had his vision, but Smalltalk was implemented as more than just message passing.
That, and Simula preceded Smalltalk and was what inspired C++, even if Kay coined the term OOP.
> The app is the object, and the various URL handlers are its state.
There were a lot of assertions in this article that rubbed me wrong, but this one was particularly egregious. Handlers are behavior. They handle something.
This feels like the kind of article I would've written early in my career when I was getting really clever with stuff and starting to be able to question the way I understood the world of programming.
I think it's saying that the state of the app object itself is the configuration of the linkage between paths, verbs and handler functions, and that's why it's worth making an app an object.
The Handlers implement behavior, sure. But from the perspective of the `app` object, the handler function objects are state. You can tell, because `app` is an instance of the Flask class and the Handlers are added to the instance by way of a function call.
So, Handlers are functions that "handle something", but they are also part of the app object's state, not its behavior.
Handlers are functions that are part of the app object's state, which alters the app's behavior.
In the majority (possibly overwhelming majority) of the cases this is true. The purpose of Handlers and similar Publisher/Subscriber patterns is effectively to change application behavior when they are configured.
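To make the state/behavior point concrete, here's a toy router in the same shape as Flask's registration API (a hypothetical sketch, not Flask's actual code):

```python
# A toy router in the same shape as Flask's registration API (sketch
# only): the handler functions live in a dict on the instance, so from
# the app object's perspective they are state, not behavior.
class App:
    def __init__(self):
        self.routes = {}  # state: path -> handler function

    def route(self, path):
        def decorator(func):
            self.routes[path] = func  # registering a handler mutates app state
            return func
        return decorator

    def dispatch(self, path):
        return self.routes[path]()  # behavior driven by that state


app = App()


@app.route("/hello")
def hello():
    return "Hello!"


print(app.dispatch("/hello"))  # Hello!
```

Each handler is behavior from its own perspective, but from the app's perspective it is just another value stored in `self.routes`.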
> There were a lot of assertions in this article that rubbed me wrong, but this one was particularly egregious. Handlers are behavior. They handle something.
class App {
public IHandlerFoo Foo; // routed as /app/Foo
public IHandlerBar Bar; // routed as /app/Bar
...
}
...
App.Foo = new HandlerFooInstance(...);
...
Foo, Bar, etc. can change as the app's state evolves, so the handlers to invoke are part of the app's state. They implement behaviour too, yes, but the original wording isn't wrong either.
I actually like "dumb" service classes with 1 method. Usually that method may require tens of dependencies to function (db connection, authorization logic, etc.), which in turn have their own dependencies recursively, and your choices are: 1) use global variables/functions for dependencies, 2) pass all dependencies as arguments to your function, 3) inject dependencies in the constructor of a service class/functor. Global state is a no-no in multithreaded code and is poorly manageable (side effects and all), while forcing a client of your function to pass all required dependencies manually as arguments is cumbersome and basically leaks implementation details (hard to refactor). So for me, service classes are the best option. It's kind of equivalent to a closure (object fields are basically captured variables), just written using the familiar OOP syntax (OOP syntax != OOP semantics).

In the context of the web, I tend to prefer stateless architectures (because they are easily scalable and less error-prone), and coupling behavior with state often gets in the way, in my experience. So my favorite architecture is a network of service classes whose dependencies are injected in their constructors and which are basically "smart" functions (not "objects"): they have no state, and they operate on domain objects which have no logic other than basic self-validation during mutation (to uphold invariants).

I know it's a heated topic (rich model vs. anemic) but I've seen projects naturally, through evolution, end up being more anemic than rich. Pure OOP, where state and behavior are entangled, feels very cumbersome to me; it's harder to refactor when requirements change every day, and having too much state is pretty error-prone. YMMV.
Exactly! I find myself writing out the same code with classes and functions to show this to folks. I still prefer using classes as it’s easier for me to see at a glance that constructor = DI/curry vs a function returning an object of functions or something. So a class is just a way to communicate a pattern.
I have definitely had the whole "what is the difference between this class <class with constructor and one method> and this function that receives the same as the class constructor returns a function implementing the method" conversation with my teammates before!
"A closure is just a method where the instance members are implicit. A method is just a closure where the closed over state is explicit."
I agree with you about service classes. I think one of the things that service classes do well that regular functional code does not is indicate to the consuming developer which parameters are meant to be provided by the environment and are largely static, and which are meant to vary per invocation.
Lisp does this instead with dynamic scope. But dynamically scoped dependencies felt too much like global variables to me. The service class instance has its scope bound to it and it won't change. But dynamically scoped functions always felt like the ground could get ripped out from under me at any time.
I do this and advocate for this in the teams I work in. In my experience, it works well.
Service classes are "configurable functions", as I tell my team. With the advantage that you don't have to monkey-patch an import in another module to unit test the function; you can inject mocks instead!
One thing I do insist on is that a service class either does something (calculate some value, retrieve some information, etc.) xor it coordinates service classes (parse value, then call the validator and, finally, persist value). This aims to prevent deep call chains where every function in between does a little bit and calls someone else, which can be hard to reason about.
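A sketch of that "does xor coordinates" split, with hypothetical service names:

```python
# Sketch of the rule: the leaf services each do one thing, and the
# coordinator only wires them together (all names hypothetical).
class Parser:
    def run(self, raw):
        return raw.strip()


class Validator:
    def run(self, value):
        if not value:
            raise ValueError("empty value")
        return value


class Store:
    def __init__(self):
        self.saved = []

    def run(self, value):
        self.saved.append(value)
        return value


class SaveValue:
    # Coordinator: does no work of its own, only sequences the leaves.
    def __init__(self, parser, validator, store):
        self.parser, self.validator, self.store = parser, validator, store

    def run(self, raw):
        return self.store.run(self.validator.run(self.parser.run(raw)))


store = Store()
SaveValue(Parser(), Validator(), store).run("  hello ")
print(store.saved)  # ['hello']
```

The point is that the call graph stays two levels deep: you never have to chase a chain of functions that each do a little bit and delegate the rest.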
I recently escaped an organization that had a couple of apps built using this approach everywhere, but for the opposite reason stated in the article. The team wanted to write code in a functional way in Java because _FP is better than OO_.
Most (if not all) classes had a single method and implemented BiFunction. The app was wired with Spring... if you had the bad luck of using Spring you can get the idea of how ugly this was. All method invocations were `apply`s and navigating the code was as easy as finding a needle in a haystack in the middle of the night. A good amount of the wonderful features of a tool like IntelliJ couldn't be used.
My rule of thumb is to follow the grain of the language / framework / library as to produce as little surprise to other devs.
This is really it. It's not a game of all-OO, zero-OO or strictly following a pattern you saw on YouTube that one time. It's all about nuance and minimizing the # of moving parts until each is really serving you some value.
I am in the process of ripping out a bunch of class bloat in a product iteration. Our typical pile of model class POCO spam for "one thing":
Entity.cs --The "official" view
EntityController.cs
EntityView.cs --Another common view that became popular over time (???)
EntityTableRow.cs --gotta have that 1:1 sql model, amirite?
EntityViewForTypeA.cs --what was wrong with the common one :(
EntityViewForTypeB.cs --And again...
...
Ultimately, every time we wanted a "view" of the data, we wound up creating another type to enshrine this and then tried to figure out how many holes we could more-or-less hammer it through. Every one of these has some cranky mapping layer too (NxM if you want to get crazy).
The alternative pattern I am working with is to directly query my database in the exact place the data is required and to use anonymous / scalar return types like so:
//Some function that produces a HTML partial view for a webapp
//View w/ join that would otherwise require a new POCO type to communicate.
var myEntityView = await sql.QueryFirstAsync(@$"
SELECT e.Id AS Id, ep.Name as Name
FROM Entities e, EntityProperties ep
WHERE e.Id = ep.EntityId
AND e.Id = @Id", ...);
//Direct usage of anonymous result type.
return $@"
<h3>Entity:
<a href='Entities/{myEntityView.Id}'>{myEntityView.Name}</a>
</h3>";
In the above, there are zero POCOs or "messenger" types. Just a raw query right where it needs to be taken, without any extra ceremony or implications for other code sites. I actually don't have a single type in the new codebase that is a POCO, other than schema tracking models (which are still useful for interpolating into queries, to avoid magic strings). Effectively, the only domain data types are 1:1 with the SQL schema now.
>This is really it. It's not a game of all-OO, zero-OO or strictly following a pattern you saw on YouTube that one time. It's all about nuance and minimizing the # of moving parts until each is really serving you some value.
Minimizing the number of moving parts shouldn't be your goal.
Testability of the moving/complicated parts should be.
I don't really care about how you break up your code, but it should be trivially testable, at both the unit, and the integration test level. OOP makes it at least possible for less experienced developers to accomplish both of those things.
It's nice that you got rid of all the indirection and all the nearly-pass-through interfaces in your code sample, but how would you test it? In a unit test? In an integration test? Are you going to need to dial out to a real, live SQL database to do so? For every test? Will you be re-using the database across multiple tests, and thus, require all your tests to avoid object collisions? Paying a startup cost for every test you run? Reusing it for some tests? Who's going to maintain that reused, QA database? Who's going to deal with it when some clown pollutes it with garbage?
These are all solvable problems, they have multiple approaches to solving them, with varying tradeoffs, but they are tradeoffs. 'Simpler-to-write' code is usually not the tradeoff you want to optimize for. You write code once, you test it thousands of times.
> It's nice that you got rid of all the indirection and all the nearly-pass-through interfaces in your code sample, but how would you test it?
> Are you going to need to dial out to a real, live SQL database to do so? For every test?
Yes. This is actually how we do it. And if you think deeply about it, this allows you to skip seamlessly between unit & integration testing, assuming all of the system components are designed against the same database. The only meaningful difference between unit & integration in this context is how much state you allow to accumulate in the database between invocations. You can reset for each method using a known expected initial state for each, or you can stand up an initial state and fly through an entire playlist of method calls.
If all of your functions take a SqlConnection object as their first argument and you've made sure that 100% of application state resides in the database, what can't be tested in this manner?
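As a self-contained sketch of that style (sqlite3's in-memory mode standing in for the real server here):

```python
# Every function takes the connection as its first argument and all
# application state lives in the database; each test builds a known
# initial state and throws the DB away afterwards. (sqlite3 in-memory
# is a stand-in for a real server; the schema is made up.)
import sqlite3


def add_entity(conn, name):
    conn.execute("INSERT INTO entities (name) VALUES (?)", (name,))


def entity_names(conn):
    return [row[0] for row in conn.execute("SELECT name FROM entities")]


def fresh_db():
    # Known initial state per test.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE entities (id INTEGER PRIMARY KEY, name TEXT)")
    return conn


# "Unit test": fresh state, one call, discard the DB.
conn = fresh_db()
add_entity(conn, "widget")
names = entity_names(conn)
print(names)  # ['widget']
```

An "integration test" is the same shape, just with more calls accumulating state against one `fresh_db()` before you assert.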
The usual reason is that if you actually require a disk-based DB to exist to run your tests it affects a) how easy it is to run them (e.g. on a CI build agent that's probably a light weight container with minimal software installed etc.) b) how quickly they run - if I can't run my entire test suite in less than a couple of minutes it's probably reducing productivity c) how much set-up code my tests need and d) it needs some mechanism to ensure tests can still safely run in parallel, esp. if your database schema isn't designed to cope with multiple independent instances sharing a DB - and having each test create and use its very own DB is almost certainly going to impact performance.
Being able to use a memory-based DB that still supports enough of the SQL your code uses is a decent compromise much of the time, but it's annoying having to write tests for 90% of your functionality one way and 10% another because it relies on behavior that's different between your memory-based and disk-based DBs.
Personally I still prefer to be able to unit test code as much as possible without needing a DB at all, which generally means separating out the code that transfers state between the DB and in-memory structures from the code that operates on said structures, but there are definitely times that's not entirely practical. Either way, I can't imagine being keen on mixing UI-level code with SQL queries, even if I understand your annoyance with having to define a bunch of DTOs just to handle specific queries (I will say that TypeScript is actually pretty good for that sort of thing, but not sure if it could help in your case).
> In the above, there are zero POCOs or "messenger" types. Just a raw query right where it needs to be taken, without any extra ceremony or implications for other code sites.
This is a terrible idea IMO. Creating a POCO per view is not a big deal; the view already depends on a specific type contract either way, except now that contract is implicit and dynamically typed, and so subject to all of the problems that follow from that.
Maybe I've been ruined by early exposure to C++, back when C++ was the new hotness, but I find encapsulation and even the namespacing aspect to be helpful. Encapsulation also comes in handy for algorithms that are complex enough to require several functions which share a lot of state. If I recall correctly, Voronoi tessellation is one of these. Even though "tessellate" is an action, putting it in a class just makes it easier to think about. Otherwise, as a user of the function, I have to first create the shared data object, then know to call the three helper functions. Better would be a single high-level tessellate() call, but then something still has to own that orchestration, and you've just pushed the complexity down to the next maintainer, who needs to figure out how the soup of functions from all the various algorithms that have collected over the years relate to each other. With an object, the newbie maintainer at least knows what the topic of the functions is.
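The shape of that argument, with a trivial stand-in algorithm rather than real Voronoi tessellation:

```python
# The "several helpers sharing a lot of state" case: a class gives the
# helpers a home and the caller one high-level entry point. (The
# algorithm below is a trivial stand-in, not real tessellation.)
class Tessellator:
    def __init__(self, points):
        self.points = points  # shared state used by every helper
        self.edges = []

    def _sort_points(self):
        self.points = sorted(self.points)

    def _build_edges(self):
        self.edges = list(zip(self.points, self.points[1:]))

    def tessellate(self):
        # The only call a user needs to know about.
        self._sort_points()
        self._build_edges()
        return self.edges


edges = Tessellator([(2, 0), (0, 0), (1, 0)]).tessellate()
print(edges)  # [((0, 0), (1, 0)), ((1, 0), (2, 0))]
```

The underscore-prefixed helpers and the shared fields document which functions belong to which algorithm, which is exactly what a flat soup of free functions loses.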
I wonder if at some point, we could train an LLM to churn out hot takes about why this one design pattern is awesome but that other design pattern is horrible and why you need to die in a fire if you use MVC but everyone should be using MVVCP instead but do stay away from MVPVC or your life will be wasted. Then we could train a second LLM to read the first LLM's articles and write flaming rebuttals and have the first LLM write rebuttals to the rebuttals. And if everything works, we could finally automate the entire debate away and go on writing code.
We could train our vacuums to utter snarky comments and eat popcorn while they watch these LLMs. Then we can go back to do something productive like gardening work.
Current LLMs are trained on internet data which is mostly hot takes including the ones you mentioned. I'm not sure there is much room for improvement and what you suggested will probably work well with existing LLMs without any specialized training.
Meanwhile, in busytown…millions of developers will wake up on Monday morning, sit down to their work and make something using the best tools they know.
Some of us would prefer to spend our days with the WhatsApp server's elegance rather than the Facebook server's ugliness. When you have to write a compiler for a poorly designed dynamic language just to get decent performance, because there isn't a feasible plan to migrate to something that truly solves the problem you have, you can't help but think that maybe things could be simpler.
Here's my hot take. Most patterns are incredibly awful and abused because people prefer to solve architectural related problems in code and produce "elegant" solutions over delivering features.
Here's my pattern that I use instead of a MVC/MVVM/MVP/etc/etc: I have a handler function to choose what url and routine I need and it calls some functions and methods and then renders some json / templates out some html.
I think a good rule of OOP is to write everything as functionally pure as possible, and refactor into classes when you see data AND functionality grouped together that you can bottle up into an object. To start ranting: I think a lot of the OOP insanity you see in .NET/Java/etc. comes from dogmatic approaches to unit testing over there, and from the common examples of how to do it involving mocking and interfacing that end up obfuscating the actual production code.
You can kinda argue whatever you want when it comes to this stuff. It doesn't really matter. Is the code readable? Does it not make me angry to look at? Yes? Then it's good enough. I don't care if the object has one method that probably should be a function. I can still read it fine.
I agree. The problem these OO patterns tend to introduce is unneeded levels of abstraction that all boil down to making the code difficult to read without three wide screen monitors.
I don't see this mentioned, but in Python a file can be considered a singleton class. If you come from Java, this may help you write better Python code. No need to encapsulate things in a class! The file is the singleton itself!
It's how the majority of the stdlib is written. In the stdlib, constructing objects is pretty much limited to things that allocate OS resources (files, sockets, etc., with obvious statefulness and destruction behavior), unless you're already operating on objects (dict views, iterators).
The exceptions are modules that were ported from elsewhere (unittest was a clone of Java's JUnit, including the method names).
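The pattern is just module-level state plus module-level functions; e.g. a hypothetical counter.py:

```python
# A hypothetical counter.py: module-level state plus module-level
# functions. Modules are created once on first import, so every
# importer shares this one "instance" -- a singleton with no class.
_count = 0


def increment():
    global _count
    _count += 1
    return _count


# Elsewhere you'd write `import counter; counter.increment()`.
first = increment()
second = increment()
print(first, second)  # 1 2 -- same module object, same state
```

Typing `counter.` in an editor then lists the module's functions, which gives you the same discoverability that the class-of-static-methods style is usually chasing.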
>Object-oriented programming is about objects: bundles of state and behavior.
That means hiding state, and having it spread throughout the app, in places that are hard to find, which is error prone and leads to complexity. And monstrosities like FactoryFactoryFactory [0] and Kingdom of nouns.
Golang has first-class support for bundling state and behavior: receiver functions, often called methods. This means exactly 2 things: a) that for a type `Foo` I can do this:
f := Foo{}
f.funcThatUsesFoo()
instead of this:
f := Foo{}
funcThatTakesFooAsFirstArgument(f)
and b) that Foo implements all interfaces whose method signatures its receiver functions satisfy.
It doesn't mean that I now have to suddenly drown my code in over-architectured patterns just to satisfy some OOP sense of code-aesthetic. It doesn't mean that I have to try and hide state from code in the same module. It doesn't mean that I have to bundle things that could be free-standing functions into some type just because. It doesn't mean I have to make any of the mistakes OOP popularized over the years.
Last pop quiz: what makes Python an object-oriented language?
Ah, hm. It can’t be classes, or I’d tell you you’re wrong. So what is it? [...]
self? No, that’s not a keyword or anything; it’s just the de facto standard name
for the first argument. So what makes self work?
That’s close enough, really. The answer is descriptors, which are basically “the
things that make self work”.
From another blog post on the same site [1]:
A descriptor is just an object; there’s nothing inherently special about it.
Like many powerful Python features, they’re surprisingly simple. To get the
descriptor behavior, only three conditions need to be met:
1. You have a new-style class. [...]
It therefore follows logically that Python code using old-style classes is not object-oriented, and that Python was not an object-oriented language prior to the introduction of descriptors.
In the first case, she's mentioning descriptors to make the same point as with "this" in JavaScript: you can bind a set of related data to a function.
Exposing "descriptors" is a latter thing, but their default implementation was already a thing before they were visible to python users.
A counter-example of a good bag of functions is every language's Math static class.
Also it is quite viable to have a bag of data and multiple different classes with algorithms that operate on that data. Those algorithms should not be trivial (though there is no good definition of what trivial is or isn't). If you toss all those algorithms into the bag-o-data class, you will wind up with a massive class that changes under too many circumstances and clearly violates separation of concerns.
If you have a three-line function, that should clearly not go into its own class. If you have a dozen methods that implement an algorithm on a class, the private methods don't need to be shared amongst other algorithms, and perhaps you have some private state dedicated to just that algorithm that doesn't make sense to keep in the data-holding class, then you should probably extract that into a standalone class. Where is the actual dividing line? I don't know. I'm wrestling with it all right now in some code that I've been writing for the past several weeks. It doesn't help that I don't yet have a second algorithm that hits this data, so there's no good dividing line for me between what should be private to the algorithm and what should be public hanging off the data.
It is not anywhere near as clear as the author of this blog makes it out to be though.
And the algorithm I'm implementing is a Job, which inherits from BackgroundJob as well. The horrors.
(I do at least have three different BackgroundJobs now which all need common task handling and cancellation/timeout kinds of logic wrapped around them so that's a bit more clear what that abstract base class should look like).
> It’s nothing. There’s no way to describe it without sounding like a blowhard. It’s not “a controller for some URL space”, because that’s what the class is. An instance of it is utterly meaningless!
IMO, this is being almost willfully obtuse and pointlessly pedantic.
For one thing, the instance, not the class, is the thing most clearly described as the controller in this case. It's true that there's no need to have instances for this static logic, but it's also true that some routing behaviors have their own dynamic data (like caches or database connections, etc.) So a point of just always using instances is that you don't need to know or care externally or ahead of time whether or not such data exists (and to provide something better than singletons or worse things to manage it when it does.)
I'm not a big fan of this form, BTW, though it's not on OO grounds. It's a code and logic organization thing. When I've seen this in real life it turns into spaghetti code of little objects connected in a tangle. I think it goes down like this: people start by encapsulating little steps of processing, I guess because they imagine they are independent. At this point it's OK -- a sequence of objects connected in a chain of references matching the processing steps. Then they realize they aren't independent, but instead of unwinding the chain, they add more objects -- to hold bits of common context -- and connect them here and there. The spaghetti is starting to tangle. (Mistakes are always made at this point -- e.g. there ends up being no coherent way to update this common data as things change.) The thing grows 10x, these kinds of things are repeated ten times, and you have a big ball of spaghetti. Oh, and someone introduced a DI framework to solve these problems, which helps a little, but also introduces its own complications.
My personal opinion is that the problems people are solving moved away from what used to be called "desktop applications", for which OOP is a good solution, to other problems. Also, the advantages of HTML/JS (cross-platform, ease of distributing updates) are so great that native apps have fallen out of favor. On the server side and for things like ML/AI, OOP is less helpful, hence the rise of FP. Unfortunately, the people developing web frameworks apparently did not have much experience with native GUI libraries, which all work more or less the same, and they decided that a functional programming style with React was the way to go. So now you have a soup of React hooks and JS callbacks that makes code about as readable as something with goto everywhere. But each individual page is rarely complex enough for this to do more than make maintenance slower and more tedious than necessary. OOP is old-fashioned now. React is the way.
Depends on how you view OOP. For instance "The Repeated Deaths of OOP" (2015) [1] notes how the definition of OOP has been morphing over time to match current realities.
One nitpick from the article, PHP does have first class support for functions in the language. Functions are available in a few different ways:
- Normal function declaration in global and "module"/namespace scope.
- Anonymous functions that can also be assigned to variables (except in a few cases, like object properties)
- Anything extending the Closure class
The last one is obviously a nod at what is going on under the hood, but as far as the developer writing the code is concerned, it doesn't matter.
A library even exists to explicitly provide some functional programming functions under a namespace for easy inclusion with other projects. There is no requirement that all functions must be defined in a namespace, as evidenced by the first 10 years of crappy PHP code littered with functions in the global namespace!
I find it funny that people who know almost nothing about C++ talk about it so much. Function pointers are not the only way to treat functions as values, as the article implies. Lambdas existed back in 2013 as well (and are nowadays the canonical way to get something like first-class functions).
And what does this mean?
Quick: off the top of your head, what makes JavaScript an object-oriented language?
If you thought “what? it’s not!” then there is no hope for you and you should go back to C++.
Ironically it's the C++ programmers that have pretty much collectively understood that classes are for maintaining invariants, which this person is almost about to understand in this article (I hope they did by now).
You can write object-oriented code in C, but no matter what tricks you do with storing function pointers in structs, you still have to pass the struct itself as an explicit argument.
A sprinkle of clang blocks and Block_copy on top of your structs and you’re on your way!
OOP is a tool like any other, and it has real costs and benefits. The original idea of being able to easily add new common APIs via inheritance is very powerful and widely applicable. In fact both Haskell and Rust wound up with a similar mechanism to inheritance (the 'deriving' annotation) to reduce the pain of problems like "I don't want to have to write code to define equality for every struct" which are hard for the purely-functional approach to solve.
Neither Haskell nor Rust uses inheritance for this, though. They use typeclasses and traits, respectively. Those have the nice property that they solve the specific problem of satisfying an interface for something without the accompanying problems of inheritance. Derive, as an example, doesn't use inheritance; it uses code generation, precisely because Rust wants to avoid inheritance.
The key insight of eevee's article is that OO has succeeded mostly because bundling state and behavior together is a really useful way to structure your code and is the thing that you should leverage the most out of object oriented languages.
`derive` macros would not be possible to imitate with inheritance. `derive(PartialEq)`, for instance, examines all the fields of a struct definition and generates a function which compares them all. Using inheritance, you could inherit from a parent `PartialEquivalence` class, but if you added any new fields, you'd have to write your new implementation of the `==` operation by hand to accommodate them.
Macros have a much longer history in purely functional languages (in particular, lisps) than they do in procedural or OO languages. So you could argue that this problem is actually much easier to solve from a functional approach.
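For what it's worth, Python sits somewhere in between: `@dataclass` is a field-inspecting code generator in the same spirit as Rust's derive, not inheritance. A small sketch:

```python
# @dataclass inspects the declared fields and generates __init__,
# __repr__, and __eq__ from them -- code generation over fields, like
# derive, rather than inheriting equality from a base class.
from dataclasses import dataclass


@dataclass
class Point:
    x: int
    y: int


@dataclass
class Point3D:
    x: int
    y: int
    z: int  # a new field: the generated __eq__ covers it automatically


print(Point(1, 2) == Point(1, 2))  # True
print(Point3D(1, 2, 3) == Point3D(1, 2, 9))  # False
```

An inherited `__eq__` from a hypothetical equality base class could not see the new `z` field; the generated one can, which is the whole point of the field-inspecting approach.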
Is the problem here not simply that Python does not have a convenient way to namespace functions except a class declaration? The `module` keyword in e.g., Julia or Rust seems like what's desired to avoid this phenomenon of "a class which is just a bundle of static methods".
You don't have to use __init__.py -- you can just create any python file and it will be a module.
And I'm not sure why you think that's unergonomic. If you want a bunch of functions to share the same namespace, just put them in the same file. The name of the file becomes your module name.
The author of the article also writes about "good classes." She criticizes two specific mistakes one can make with Python classes (and she is right), but not OOP itself.
In my opinion you're not really doing object-oriented programming unless each object is also a separate process. This is why the best object-oriented programming uses functional programming languages and the actor model (i.e., Erlang). Just having state and methods, but without an associated isolated process, means you end up in situations in which multiple processes may be fiddling with the same objects and all your encapsulation of data just gets you in a twist. It's only when you commit fully to functional programming and have immutable state that you truly unlock the power of object-oriented programming.
> without an associated isolated process means you end up in situations in which multiple processes may be fiddling with the same objects and all your encapsulation of data just gets you in a twist.
1. Don't do that. Don't have multiple threads messing with the same data.
2. If you have to do that, use a mutex/semaphore/guarded region.
So far, so conventional. There's nothing about OOP in that.
The difference with OOP is that only member functions can access that data, so you have a strictly limited number of places you have to think about protecting. OOP makes that aspect easier. Each class just protects its own data.
Of course, you still have to worry about "deadly embrace", and by putting the semaphores in the class, it can make that aspect harder to reason about.
> It's only when you commit fully to functional programming and have immutable state that you truly unlock the power of object-oriented programming.
Um, no. Almost all real programs have mutable state, and the state has to live somewhere.
I work in embedded systems. There's often a lot of persistent state, and, worse, shared mutable persistent state. FP isn't going to make that go away; it's just going to distribute it differently. But if I have to reason about the shared mutable state, then a distribution that pretends it isn't there doesn't make that easier. (Pure functions, on the other hand, do help.)
I mean, look, FP is absolutely right that shared mutable state is evil. Avoid it... if you can. (And think harder about whether you can.) But sometimes the very nature of the problem is shared mutable state, and when that happens, you need tools that help you deal with it, not tools that say "don't do that".
> Um, no. Almost all real programs have mutable state, and the state has to live somewhere.
The state lives in the process and you can have millions of processes if you'd like. If you want to get some bit of state then just send a message to the process that manages that state and it will give it to you. It's like traditional object oriented programming except the objects are alive.
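A rough Java sketch of that idea (the `CounterProcess` name and the message shapes are invented for illustration, not anyone's real API): the state is a local variable of one owner thread, and the only way to read or change it is to put a message in the mailbox.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

// Hypothetical sketch of an Erlang-style "process" in Java: the counter's
// state is a local variable of one thread; nothing else can reach it.
class CounterProcess {
    // A message is either a reply channel (meaning "send me the current
    // value") or anything else (treated as an increment request).
    private final BlockingQueue<Object> mailbox = new LinkedBlockingQueue<>();

    CounterProcess() {
        Thread owner = new Thread(() -> {
            long state = 0; // no other thread can touch this
            try {
                while (true) {
                    Object msg = mailbox.take();
                    if (msg instanceof BlockingQueue) {
                        @SuppressWarnings("unchecked")
                        BlockingQueue<Long> replyTo = (BlockingQueue<Long>) msg;
                        replyTo.put(state);
                    } else {
                        state++;
                    }
                }
            } catch (InterruptedException e) {
                // the "process" dies
            }
        });
        owner.setDaemon(true);
        owner.start();
    }

    void increment() throws InterruptedException {
        mailbox.put("increment");
    }

    long get() throws InterruptedException {
        BlockingQueue<Long> replyTo = new SynchronousQueue<>();
        mailbox.put(replyTo);
        return replyTo.take(); // blocks until the owner thread replies
    }
}
```

Because the mailbox is processed in order by a single thread, there is no lock anywhere: serialization of access falls out of the message passing itself.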
There's a reason Erlang works the way it does, but not every problem will make sense to solve with a million object processes communicating with each other.
That style fits when the main thrust of the program is to simulate interactions between objects. I'm not the biggest general fan of it, but that is where it shines.
Maybe because there are a lot of times where it really is more convenient to just update a variable or print a string instead of jumping through a million hoops to show off how pure everything is? Or we just want to implement quicksort without having a seizure?
Ok, you prefer to deal with entire classes of bugs in order to keep using objects because you incorrectly think that objects model the world better. I used to have the same perspective, and all I can say is that you'll either grow out of it or you won't.
Objects aren't an ideal tool for modeling the world, and immutable objects and variables even less so. Objects and classes often come in handy for modeling something obviously fake, like a video game.
I've never thought, 'Gee, this game would be so much easier to program if I had to copy every data structure when I wanted to update it, and have programs be impossible to step through because I'd rather write a pure program with no good debugging tools just to sound cool.'
It is interesting that you think functional programming is somehow more difficult or less powerful than object-oriented programming. I have found the opposite to be the case. Functional programming makes it easier for me to debug programs, frankly. That's why I like it. Once I got the hang of it it honestly felt like a huge weight off my shoulders.
A little bird told me that it was easier to break up big projects into classes and farm them out than in other languages at the time. The `protected` keyword, for example, kept other developers' code (outside the subclass hierarchy) from modifying that part of the object, creating safety.
She seems quite deeply knowledgeable about a lot of different languages, which is impressive. Either that or a lot of research went into this. I just don't have that much stuff stick, especially for languages I don't use every day.
It's a good thing the greatest minds of programming have worked hard to solve this problem by giving us the even more platonic and rigid functional programming.
Well, Simula was developed specifically to model real-world problems, and when they had invented it they found it much more useful than what had preceded it. OO is not supposed to model nature. It's not string theory or whatever other incomplete-yet-closer-to-reality-than-OO model physicists use.
But what about how humans think? We make use of universals in our language all the time. And sometimes that fits a problem domain well, like making a GUI or running a simulation of things.
We don't really think in terms of OOP. OOP is an abstraction and systematization of inheritance, which we do sometimes use to think with, but once it's abstracted like that it becomes untenable to use...
There are relations between things that can be mapped well with inheritance, but that's the problem: we don't really want code to represent the thing we're coding. Code needs to be more dynamic, easier to read, change, and move around, etc. The representation should sit between the end result of the code and the thing we're trying to represent.
It is interesting that Java is mentioned there as a language without first-class functions. Maybe, but from a programmer's perspective it kind of does have them, encapsulated in "namespaces": they are static methods of classes, and with `import static` they do look like first-class functions.
What do you mean? Since Java 8 you can assign a function to a variable, pass it as a parameter, etc. Lambdas in Java are not pure functions under the hood, since they are converted to objects by the compiler, but at the user level they have all the attributes of first-class functions.
Well... you can define an interface, and have a class that implements the interface, and pass an instance of that class as a variable. That's not quite "passing a function as a variable", but it's kind of close if you squint.
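For what it's worth, since Java 8 the interface step is implicit. A minimal sketch (the `twice` helper and the class name are made up for illustration; `IntUnaryOperator` and `IntBinaryOperator` are real `java.util.function` interfaces):

```java
import java.util.function.IntBinaryOperator;
import java.util.function.IntUnaryOperator;

public class FirstClassDemo {
    // A function received as a parameter and applied twice.
    static int twice(IntUnaryOperator f, int x) {
        return f.applyAsInt(f.applyAsInt(x));
    }

    public static void main(String[] args) {
        IntUnaryOperator addOne = x -> x + 1;   // a lambda assigned to a variable
        IntBinaryOperator max = Math::max;      // an existing method, referenced by name
        System.out.println(twice(addOne, 40));  // 42
        System.out.println(max.applyAsInt(3, 7)); // 7
    }
}
```

Under the hood each of these still becomes an object implementing a single-method interface, so it's the same squinting, just done by the compiler.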