First time I have heard of object-oriented obfuscation.
I get it, but in general I don't get the OO hate.
It's all about the problem domain imo. I can't imagine building something like a graphics framework without some subtyping.
Unfortunately, people often use crap examples for OO. The worst is probably employee, where employee and contractor are subtypes of worker, or some other chicanery like that.
Of course in the real world a person can be both employee and contractor at the same time, can flit between those roles and many others, can temporarily park a role (e.g. a sabbatical), and many other permutations, all while maintaining history and even allowing for corrections of said history.
It would be hard to find any domain less suited to OO than HR records. I think these terrible examples are a primary reason for some people believing that OO is useless or worse than useless.
For me, it's the fact that the mess of DAOs and Factories that constituted "enterprise" Java in the 00s was a special kind of hellscape that was actively encouraged by the design of the language.
Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.
It was terrible and taught me to avoid applying for jobs that used Java.
I like OOP and often use it. But mostly just as an encapsulation of functionality, and I never use interfaces or the like.
As someone who has been coding since 1986, it is always kind of interesting how Java gets the hate for something it never started, something that was already common in the industry even before Oak became an idea.
To the point that there are people that will assert the GoF book, published before Java was invented, actually contains Java in it.
People did it, sometimes, when they needed it.
It was so rare that the GoF thought they needed to write a book to teach people how to use those patterns when they eventually encountered them.
But after the book was published, those patterns became "advanced programming that is worth testing for in job interviews", and people started to code for their CVs. The same happened briefly with refactoring, and for much longer with unit tests and the other XP activities (like TDD).
At the same time, Java's popularity was exploding on enterprise software.
It was, but still the book did not magically change from Smalltalk and C++ into Java.
It is probably because Java took this design philosophy (or should I say dogma) to heart: its very syntax and structure encourage writing code like that. One example: it does not have proper modules. Modules, the one thing most people can agree is a good thing, enabling modularity, literally. Another one: you cannot simply have a function in a module. Shit needs to be inside classes or mixed up with other unrelated concepts. Java the language encourages this kind of madness.
They are called packages. There is nothing in the module concept that requires the existence of functions as standalone entities.
Again, Smalltalk did it first, and is actually one of the two languages in the famous GoF book, used to create all the OOP patterns people complain about; the other is C++.
> There is nothing in the module concept that requires the existence of functions as standalone entities.
I didn't claim it does. To make the point though: bare functions are a much simpler building block, and a much cleaner building block than classes. Classes by their nature put state and behavior in one place. If one doesn't need that, then a class is actually not the right concept to go for (assuming one has the choice, which one doesn't in Java). A few constants and a bunch of functions would be a simpler and fully sufficient concept in that case. And how does one group those? Well, a module.
In Java you are basically forced to make unnecessary classes that only have static functions as members to achieve a similar simplicity, but then you still get that ugly class thing thrown in unnecessarily.
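For example, the usual workaround looks something like this (a minimal sketch with made-up names):

    // What would be "a module with two functions" elsewhere becomes a class
    // that exists only as a namespace for static methods.
    public final class GeometryUtils {
        private GeometryUtils() {}  // never instantiated

        public static double circleArea(double radius) {
            return Math.PI * radius * radius;
        }

        public static double circleCircumference(double radius) {
            return 2 * Math.PI * radius;
        }
    }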
In a few other languages things may be built on something other than functions. Like words in Forth or something. But even those can be interpreted as functions with a few implicit arguments. And you can just write them down. No need to put them into some class or some blabliblub concept.
From a type-theory point of view, a class is an extensible module that can be used as a variable.
As mentioned in another reply, Java did not invent this, it was building upon Smalltalk and SELF, with a little bit of Objective-C on the side, and C++ like syntax.
Try to create a single function in Smalltalk, or SELF.
http://stephane.ducasse.free.fr/FreeBooks.html
https://www.strongtalk.org/
https://selflanguage.org/
It is also no accident that when Java came onto the scene, some big Smalltalk names like IBM migrated their Smalltalk tooling to Java practically overnight, and to this day Eclipse still has the same object browser as any Smalltalk environment.
Smalltalk,
https://www.researchgate.net/figure/The-Smalltalk-browser-sh...
You will find a certain similarity with NeXTSTEP's navigation tools as well, and eventually the OS X Finder,
The code browser in Eclipse
https://i.sstatic.net/4OFEM.png
By the way, in OOP languages like Python, even functions are objects.
> The code browser in Eclipse
> https://i.sstatic.net/4OFEM.png
«Error 1011: Access denied. The owner of this website (i.sstatic.net) does not allow hotlinking to that resource (/4OFEM.png).»
In Java, it has to be a class in a package. Packages are sane enough. That isn't the point.
That is the point: packages are the Java language feature for the CS concept of modules.
https://en.wikipedia.org/wiki/Modular_programming
> Languages that formally support the module concept include Ada, ALGOL, BlitzMax, C++, C#, Clojure, COBOL, Common Lisp, D, Dart, eC, Erlang, Elixir, Elm, F, F#, Fortran, Go, Haskell, IBM/360 Assembler, IBM System/38 and AS/400 Control Language (CL), IBM RPG, Java, Julia, MATLAB, ML, Modula, Modula-2, Modula-3, Morpho, NEWP, Oberon, Oberon-2, Objective-C, OCaml, several Pascal derivatives (Component Pascal, Object Pascal, Turbo Pascal, UCSD Pascal), Perl, PHP, PL/I, PureBasic, Python, R, Ruby,[4] Rust, JavaScript,[5] Visual Basic (.NET) and WebDNA.
If the whole complaint is that you cannot have a bare bones function outside of a class, Java is not alone.
Predating Java by several decades, Smalltalk, StrongTalk, SELF, Eiffel, Sather, BETA.
And naturally lets not forget C#, that came after Java.
Thankfully those days are not with us any more. Java has moved on quite considerably in the last few years.
I think people are still too ready to use massive, hulking frameworks for every little thing, of course, but the worst of the 'enterprise' stuff seems to have been banished.
I hope you are right. I really do. But I have a hunch that if I accepted any Java job, I would simply have coworkers who are still stuck in the "enterprise" Java ideology, and whose word has more weight than the word of a newcomer. That's one of the fears that stops me from seriously considering Java shops: fear of unreasonable coworkers, and of then being forced to deliver shitty work that meets their idea of how the code should be written in the most enterprise way they can come up with.
Always makes me think of that AbstractProxyFactorySomething or similar that I saw in Keycloak, for when you want to implement your own password quality criteria. When you step back a bit and think about what you actually want to have, you realize that all you actually want is a function that takes a string as input and returns a boolean, depending on whether the password is strong enough or fulfills all criteria. Maybe you want to output a list of unmet criteria, if you want to make it complex. But no, it's AbstractProxyFactorySomething.
I don't understand these complaints.
Here is a tiny interface that will do what you need:
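Something along these lines (a sketch; the names are illustrative, not Keycloak's actual SPI):

    @FunctionalInterface
    interface PasswordPolicy {
        boolean isAcceptable(String password);
    }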
Now you can trivially declare a lambda that implements the interface. Example:
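    public class PasswordPolicyDemo {
        public static void main(String[] args) {
            // Hypothetical rule: at least 12 characters and at least one digit.
            PasswordPolicy policy =
                password -> password.length() >= 12
                         && password.chars().anyMatch(Character::isDigit);

            System.out.println(policy.isAcceptable("correct horse battery 1"));  // true
        }
    }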
I'm personally rather fond of Java, but even this (or the shorter `Predicate`) still can't compete with the straightforward simplicity of a type along the lines of `string -> bool`.
It insists upon itself. That’s really the problem with Java’s design philosophy from that era; it encourages ceremony. Even if you don’t write the full-on "Enterprise™" soup of DAOs, Factories, and ServiceLocators, the language’s type system and conventions gently nudge you toward abstraction layers you don’t actually need.
Interfaces for everything, abstract classes “just in case,” dependency injection frameworks that exist mainly to manage all the interfaces. Java (and often Enterprise C#) is all scaffolding built to appease the compiler and the ideology of “extensibility” before there’s any actual complexity to extend.
You can write clean, functional, concise Java today, especially with records, pattern matching, and lambdas, but the culture around the language was forged in a time when verbosity was king.
I think the best description of this kind of "obfuscation" that especially afflicted Java still is Steve Yegge's "Kingdom of Nouns" rant:
https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...
It's very useful in C++, funnily enough. This is because I can have a non-templated interface base class, then a templated impl class.
Then my templated impl header can be very heavy without killing my build times since only the interface base class is #included.
Not sure if this is as common in Java.
Using more complex architecture (which requires more human time to understand) to merely make build time shorter is a ridiculous choice.
For a large project this could save hours of developer time.
C++ is a hell of a language.
just buy your devs faster computers to compile on
You can't buy your way out of this, because C++ builds are only parallelizable across multiple translation units[1] (i.e. separate .cpp files). Unless you're willing to build a better single-core CPU, there's not much you can do.
The challenge with modern C++ projects is that every individual TU can take forever to build because it involves parsing massive header files. Oftentimes you can make this faster with "unity builds" that combine multiple C++ files into a single TU since the individual .cpp file's build time is negligible compared to your chonky headers.
The reason the header files are so massive is because using a templated entity (function or class) requires seeing the ENTIRE DEFINITION at the point of use, because otherwise the compiler doesn't know if the substitution will be successful. You can't forward declare a templated entity like you would with normal code.[2]
If you want to avoid including these definitions, you create an abstract interface and inherit from that in your templated implementations, then pass the abstract interface around.
[1] or linking with mold
[2] There used to be a feature that allowed forward declaring templated entities called "export". A single compiler tried to implement it and it was such a failure it was removed from the language. https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14...
Java uses type erasure, which is very cheap at compile time, but you cannot do things like the following (typical erasure limitations, for illustration):
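    class ErasureLimits<T> {
        // None of these compile in Java, because the type argument is erased
        // and unknown at runtime:
        //
        // T[] values = new T[10];                              // generic array creation
        // T fresh = new T();                                   // cannot instantiate a type parameter
        // boolean b = obj instanceof java.util.List<String>;   // illegal generic type for instanceof
        //
        // Overloads that differ only in the type argument also collide,
        // since List<String> and List<Integer> erase to the same raw List.
    }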
C++ uses reified generics, which are heavy on compile time but allow the above. That's why Java's are called generic parameters, not template parameters; the code is generic over all possible parameters, not templated for every possible parameter.
Interesting, I'd never picked up on this pedantic subtlety. I too thought reified meant what you could do at the call site, not what you could do at runtime. Was my understanding wrong, or is Gemini hallucinating?
In any event, you have to use weird (I think “unsafe”) reflection tricks to get the type info back at runtime in Java. To the point where it makes you think it’s not supported by the language design but rather a clever accident that someone figured out how to abuse.
Funny, given that interfaces are the good part (especially compared to inheritance).
I didn't mean to imply that interfaces are bad or useless. Just that I don't use them. Probably because I write most of my stuff in Python these days.
"But but... I can swap out my entire my persistance layer since it's all just an interface!"
Has anyone ever actually done this ?
I have used something similar with effects in Haskell to mock "the real world" for running tests.
But if it was as convoluted to use as it is in Java, I wouldn't. And also, it's not enterprise CRUD. Enterprise CRUD resists complex architectures like nothing else.
> Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.
Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among its purposes, it allows code to be reused and even updated without having to recompile the project. That's pretty useful.
Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.
Perhaps you haven't had the opportunity to experience the advantages of using these techniques, or weren't mindful of when you benefited from them. We tend to remember the bad parts and assume the good parts are a given. But personal tastes don't refute the value and usefulness of features you never learned to appreciate.
> > Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.
> Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among its purposes, it allows code to be reused and even updated without having to recompile the project. That's pretty useful.
> Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.
I don't think GP was saying that dynamically loaded objects are not needed, or that interfaces are not needed.
I read it more as "Dynamically loaded interfaces that can be swapped out are not needed".
> Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.
The share of all software that actually benefits from this is extremely small. Most web-style software with stateless request/response is better architected for containers and rolling deployments. Most businesses are also completely fine with a few minutes of downtime here and there. For runtime-replacement to be valuable, you need both statefulness and high SLA (99.999+%) requirements.
To be fair, there is indeed a subset of software that is both stateful and with high SLA requirements, where these techniques are useful, so it's good to know about them for those rare cases. There is some pretty compelling software underneath those Java EE servers for the few use-cases that really need them.
But those use-cases are rare.
>You cannot have quality software without these basic testing techniques
Of course you can, wtf?
Mocks are often the reason tests are green while the app doesn't work :)
> Of course you can, wtf?
Explain then what is your alternative to unit and integration tests.
> Mocks are often the reason tests are green while the app doesn't work :)
I don't think that's a valid assumption. Tests just verify the system under test, and test doubles are there only to provide inputs in a way that isolates your system under test. If your tests either leave out invariants that are behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created tests, not in the concept of a mock.
A workman and his tools.
I personally believe mocks are a bad practice which is caused by bad architecture. When different components are intertwined, and you cannot test them in isolation, the good solution is refactoring the code; the bad is using mocks.
For an example of such intertwined architecture, see Mutter, the window manager of GNOME Shell (a program that manages windows on the Linux desktop). The code that handles key presses (accessibility features, shortcuts) needs objects like MetaDisplay or MetaSeat and cannot be tested in isolation; you figuratively need half of Wayland for it to work.
Good tests use the black-box principle, i.e. they only use public APIs and do not rely on knowledge of the inner workings of a component. When the component changes, tests do not break. Tests with mocks rely on knowing how the component works and which functions it calls; such tests become brittle, break often, and require a lot of effort to update when the code changes.
Avoid mocks as much as you can.
It's not necessary to have mocks for unit tests. They can be a useful tool, but they aren't required.
I am fine with having fake implementations and so forth, but the whole "when function X is called with Y arguments, return Z" thing is bad. It leads to very tight coupling of the test code with the implementation, and often means the tests are only testing against the engineer's understanding of what's happening - which is the same thing they coded against in the first place. I've seen GP's example of tests being green but the code not working correctly a number of times because of that.
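The style in question looks roughly like this (Mockito-style stubbing with hypothetical types, purely to show the coupling):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    interface UserRepository {
        String findNameById(long id);
    }

    class Greeter {
        private final UserRepository repo;
        Greeter(UserRepository repo) { this.repo = repo; }
        String greet(long id) { return "Hello, " + repo.findNameById(id); }
    }

    class GreeterTest {
        void greetsUser() {
            UserRepository repo = mock(UserRepository.class);
            // "When findNameById is called with 42, return Ada": the test now encodes
            // exactly which call Greeter makes internally, so refactoring Greeter can
            // break the test even when its observable behaviour is unchanged.
            when(repo.findNameById(42L)).thenReturn("Ada");

            assert new Greeter(repo).greet(42L).equals("Hello, Ada");
        }
    }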
Most compilers do not use "unit" tests per se. Much more common are integration tests targeted at a particular lowering phase or optimization pass.
This is pretty important since "unit tests" would be far too constraining for reasonable modifications to the compiler, e.g. adding a new pass could change the actual output code without modifying the semantics.
LLVM has "unit" tests.
I mean they run a single pass with some small LLVM IR input and check that the output IR is fine.
>Explain then what is your alternative to unit and integration tests.
Tests against real components instead of mocks.
>If your tests either leave out invariants that are behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created tests, not in the concept of a mock.
Nowadays external components can be very complex systems, e.g. DBs, messaging queues, third-party APIs, and so on.
A lot of things can go wrong, and you aren't even aware of them in order to get the mocks right.
An example? Fuckin' emojis.
On a mocked in-memory database they work fine, but fail on the real DB due to encoding settings.
I'm not a fan of extensive mocking but you're conflating two rather different test cases. A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration. You should of course have tests towards the database too, but then you'd mock out parts of the application instead and not the database itself.
>A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration.
Wdym?
You're testing e.g. a simple CRUD operation, say creating an HN thread.
With a mocked DB it passes, with the real DB it fails due to an encoding issue.
The result is that the tests are green, but the app does not work.
We're talking about OO Java. You bring up shared libraries, list a bunch of things not unique to Java nor OO, then claim `etc.` benefits.
You really haven't argued anything, so ending on a "you must be personally blind" jab just looks dumb.
It's because the concepts are the same, but people get enraged by the words. What Java calls a factory would be a "plugin loader" in C++. It's the same concept. And most big C++ codebases end up inventing something similar. Windows heavily uses COM which is full of interfaces and factories, but it isn't anything to do with Java.
Java I think gets attacked this way because a lot of developers, especially in the early 2000s, were entering the industry only familiar with scripting languages they'd used for personal hobby projects, and then Java was the first time they encountered languages and projects that involved hundreds of developers. Scripting codebases didn't define interfaces or types for anything even though that limits your project scalability, unit testing was often kinda just missing or very superficial, and there was an ambient assumption that all dependencies are open source and last forever whilst the apps themselves are throwaway.
The Java ecosystem quickly evolved into the enterprise server space and came to make very different assumptions, like:
• Projects last a long time, may churn through thousands of developers over their lifetimes and are used in big mission critical use cases.
• Therefore it's better to impose some rules up front and benefit from the discipline later.
• Dependencies are rare things that create supplier risks, you purchase them at least some of the time, they exist in a competitive market, and they can be transient, e.g. your MQ vendor may go under or be outcompeted by a better one. In turn that means standardized interfaces are useful.
So the Java community focused on standardizing interfaces to big chunky dependencies like relational databases, message queuing engines, app servers and ORMs, whereas the scripting language communities just said YOLO and anyway why would you ever want more than MySQL?
Very different sets of assumptions lead to different styles of coding. And yes it means Java can seem more abstract. You don't send queries to a PostgreSQL or MySQL object, you send it to an abstract Connection which represents standardized functionality, then if you want to use DB specific features you can unwrap it to a vendor specific interface. It makes things easier to port.
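Roughly like this (the URL and credentials are placeholders; unwrap() is part of the standard java.sql API):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcSketch {
        public static void main(String[] args) throws Exception {
            // Any vendor's JDBC driver is reached through the same abstract interfaces.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/mydb", "user", "secret");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {

                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }

                // Vendor-specific features stay reachable by unwrapping the standard
                // interface, e.g. conn.unwrap(org.postgresql.PGConnection.class)
                // when the PostgreSQL driver is on the classpath.
            }
        }
    }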
I am currently being radicalised against OOP because of one specific senior in my team that uses it relentlessly, no matter the problem domain. I recognise there are problems where OOP is a good abstraction, but there are so many places where it isn't.
I suspect many OOP haters have experienced what I'm currently experiencing: stateful objects for handling calculations that should be stateless, a confusing bag of methods that are sometimes hidden behind getters so you can't even easily tell where the computation is happening, etc.
You can write crappy code in any language. I don't think it's specific to Java. Overall I think Java is pretty good, especially for big code bases.
But there's a real difference in how easy it is to write crappy code in a language. In Java that'd be, for example, nullability, or mutability. Kotlin, in comparison, makes those explicit and eliminates some pain points. You'd have to go out of your way and make your code actively worse for it to be on the same level as the same Java code.
And then there's a reason they're teaching the "functional core, imperative shell" pattern.
On the other hand, Java's tooling for correctly refactoring at scale is pretty impressive: using IntelliJ, it's pretty tractable to unwind quite a few messes using automatic tools in a way that's hard to match in many languages that are often considered better.
I agree with your point, and I want to second C# and JetBrains Rider here. Whatever refactoring you can do with Java in JetBrains IntelliJ, you can do the same with C#/Rider. I have worked on multiple code bases in my career that were hundreds of thousands of lines of Java and/or C#. Having a great IDE experience was simply a miracle.
The language Kotlin is actually developed by JetBrains.
I've found that IntelliJ's refactorings don't work as well for Kotlin as Java but, also, I've avoided Kotlin because I don't like it very much.
You gotta admit, though, that a language which strongarms you into writing classes with hidden state and then extending and composing them endlessly is kinda pushing you in that direction.
It’s certainly possible to write good code in Java but it does still lend itself to abuse by the kind of person that treated Design Patterns as a Bible.
>kind of person that treated Design Patterns as a Bible
I have a vague idea of what the Bible says, but I have my favorite parts that I sometimes get loud about. Specifically, please think really hard before making a Singleton, and then don't do it.
Singletons are so useful in single threaded node land. Configuration objects, DB connection objects that have connection pooling behind them, even my LLM connection is accessed via a Singleton.
OK yeah that's a pretty good general principle. You think you only need one of these? Are you absolutely certain? You SURE? Wrong, you now need two. Or three.
A singleton is more than just, "I only need one of these," it is more of a pattern of "I need there to be only one of these," which is subtly different and much more annoying.
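A minimal sketch of the pattern being described, for contrast:

    // The class itself enforces "there can be only one of these".
    public final class AppConfig {
        private static final AppConfig INSTANCE = new AppConfig();

        private AppConfig() {}  // private constructor: nobody else can ever create one

        public static AppConfig getInstance() {
            return INSTANCE;
        }
    }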
Separation of data and algorithm is so useful. I can't really comment on how your senior is doing it, but in the area of numeric calculations, making numbers know anything about their calcs is a Bad Idea. Even associations with their units or other metadata should be loose. Functional programming provides such a useful intellectual toolkit even if you program in Java.
Sorry to learn, hope you don't get scar tissue from it.
Not sure how many people are writing programs with lots of numeric calculations.
Most programs in my experience are about manipulating records: retrieve something from a database, manipulate it a bit (change values), update it back.
Over here OOP does a good job - you create the data structures that you need to manipulate, but create the exact interface to effect the changes in a way that respects the domain rules.
I do get that this isn't every domain out there and _no one size fits all_, but I don't get the OP's complaints.
I currently think that most of the anger about OOP is either related to bad practices (overusing) or to lack of knowledge from newcomers. OOP is a tool like any other and can be used wrong.
Creating good reusable abstractions is not easy. It's quite possible to create tarballs of unusable or overwrought abstractions. That is less of a knock on OOP and more a knock on the developers.
But that is what classes do: they let you have data lists and dictionaries implemented as a class so that your algorithm doesn't have to understand how the data structure is implemented. In functional programming the algorithm has to be aware of the data structure, and I feel that is much worse.
> I recognise there are problems where OOP is a good abstraction, but there are so many places where it isn't.
Exactly. This is the way to think about it, imo. One of those places is GUI frameworks, I think, and there I am fine doing OOP, because I don't have a better idea how to get things done, and most GUI frameworks/toolkits/whatever are designed in an OOP way anyway. Other places I just try to go functional.
I agree. Neither OOP nor functional programming should be treated as a religion or as a paradigm that one must either be fully invested in or not.
OOP is a collection of ideas about how to write code. We should use those ideas when they are useful and ignore them when they are not.
But many people don't want to put in the critical thinking required to do that, so instead they hide behind the shield of "SOLID principles" and "best practice" to justify their bad code (not knocking the SOLID principles; it's just that people use them to justify making things object oriented when they shouldn't be).
I think the OO hatred comes from how academia and certain enterprise organisations in our industry picked it up and taught it like a religion, molding an entire generation of developers who wrote some really horrible code because they were taught that abstractions were always correct. It obviously wasn't so, and outside those institutions the world slowly realized that abstractions were in many ways worse for cyclomatic complexity than what came before. Maybe not in a perfect world where people don't write shitty code on a Thursday afternoon after a long day of horrible meetings in a long week of having a baby cry every night.
As with everything, there isn't a golden rule to follow. Sometimes OO makes sense, sometimes it doesn't. I rarely use it, or abstractions in general, but there are some things where it's just the right fit.
Much like Agile, or Hungarian notation. When a general principle becomes a religion it ceases to be a good general principle.
> I think the OO hatred comes from how academia and certain enterprise organisations for our industry picked it up and taught it like a religion.
This, this, this. So much this.
Back when I was in uni, Sun had donated basically an entire lab of those computer terminals that you signed in to with a smart card (I forget the name). In exchange, the uni agreed to teach all classes related to programming in Java, and to have the professors certify in Java (never mind the fact that nobody ever used that laboratory because the lab techs had no idea how to work with those terminals).
As a result of this, every class, from algorithms to software architecture, felt like a Java cult indoctrination. One of the professors actually said C was dead because Java was clearly superior.
> One of the professors actually said C was dead because Java was clearly superior.
In our uni (around 1998/99) all professors said that, except the Haskell teacher, who indeed called Java a mistake (but C also).
Turns out everyone was completely wrong except for that one guy working in Haskell.
Tale as old as time.
Java was probably close to 50% of the job market at some point in the 2000s, and C significantly dried up, with C++ taking its place. So I'm afraid everyone was right, actually.
To be honest, I'm convinced the reason so many people dislike Java is because they have had to use it in a professional context only. It's not really a hobbyist language.
Just for the record, I don't think C ever dried up in the embedded space. And the embedded space is waaaay bigger than most people realise, because almost all of it is proprietary, so very little "leaks" onto the public interwebs.
Believe it or not but there is plenty of Java and C++ in the embedded space. It’s far from being a C fortress.
Probably the Sun Ray computer.
https://en.wikipedia.org/wiki/Sun_Ray
This was it!
And now you know how Nvidia CUDA got so popular.
Tried to modify one boolean in a codebase a few weeks ago and I had to go thru like 12 levels of indirection to find "the code that actually runs".
tourist2d seems to have triggered some moderation trap, but wrote:
> Sounds like a problem with poor code rather than something unique to OOP.
And yeah, OO may lean a bit towards more indirection, but it definitely doesn't force you to write code like that. If you go through too many levels, that's entirely on the developer.
Sounds like a problem with poor code rather than something unique to OOP.
They use higher order types to implement subtyping as a library, with combinators. For example, you can take your fudget that does not (fully) implement some functionality, wrap it into another one that does (or knows how to) implement it and have a combined fudget that fully implements what you need. Much like parsing combinators.
(Hi Andrew)
It's the misuse of OO constructs that gives it a bad name, almost always that is inheritance being overused/misused. Encapsulation and modularity are important for larger code bases, and polymorphism is useful for making code simpler, smaller and more understandable.
Maybe the extra long names in Java don't help either, along with the overuse/forced use of patterns? At least it's not Hungarian notation.
Heck, I love the long names. I know, I also hate FooBarSpecializedFactory, but that's waaaay better than FBSpecFac.
A sample: pandas loc, iloc, etc. Or Haskell's scanl1. Or Scheme's cdr and car. (I know - most of these examples are common functions that you'll learn after a while, but still, reading them at first is terrible.)
My first contact with a modern OO language was C# after years of C++. And I remember how I thought it awkward that the codebase looked like everything was spelled out. Until I realized that it is easier to read, and that's the main quality of a codebase.
Objective-C says hello where extra long names are concerned.
> CMMetadataFormatDescriptionCreateWithMetadataFormatDescriptionAndMetadataSpecifications(allocator:sourceDescription:metadataSpecifications:formatDescriptionOut:)
https://developer.apple.com/documentation/coremedia/cmmetada...:)
Jason! Couldn't agree more.
OOP is just not how computers work.
Computers work on data. Every single software problem is a data problem. Learning to think about problems in a data oriented way will make you a better developer and will make many difficult problems easier to think about and to write software to solve.
In addition to that, data oriented software almost inherently runs faster because it uses the cache more efficiently.
The objects that fall out of data oriented development represent what is actually going on inside the application instead of how an observer would model it naively.
I really like data oriented development and I wish I had examples I could show, but they are all $employer’s.
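A generic toy sketch of the contrast, though (nothing proprietary, purely illustrative):

    // Object-per-item layout: each particle is a separate heap object.
    class ParticleObject {
        double x, y, vx, vy;
    }

    // Data-oriented layout: one contiguous array per field, updated in a tight
    // loop that walks memory sequentially and is friendly to the cache.
    class Particles {
        final double[] x, y, vx, vy;

        Particles(int n) {
            x = new double[n];
            y = new double[n];
            vx = new double[n];
            vy = new double[n];
        }

        void step(double dt) {
            for (int i = 0; i < x.length; i++) {
                x[i] += vx[i] * dt;
                y[i] += vy[i] * dt;
            }
        }
    }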
As a reverse engineer, I totally get the phrase.
Even with non-obfuscated code, if you're working with a decompilation you don't get any of the accompanying code comments or documentation. The more abstractions are present, the harder it is to understand what's going on. And, the harder it is to figure out what code changes are needed to implement your desired feature.
C++ vtables are especially annoying. You can see the dispatch, but it's really hard to find the corresponding implementation from static analysis alone. If I had to choose between "no variable names" and "no vtables", I'd pick the latter.
Vtables can be annoying to follow through, but try reverse-engineering an Objective-C binary! Everything is dispatched dynamically, so 99% of the call graph ends in objc_msgSend(). Good luck figuring out what the message is, and the class of the object receiving it.
Isn't that easy? The message is a string in one of the register parameters to it.
> Everything is dispatched dynamically
Well, not everything, there is NS_DIRECT. The reason for that being that dynamic dispatch is expensive - you have to keep a lot of metadata about it in the heap for sometimes rarely-used messages. (It's not about CPU usage.)
It’s all about the data model and the architecture.
I think people focus a lot on inheritance, but the core idea of OO is more the grouping of values and functions. Conceptually, you think about how methods transform the data you are manipulating, and that's a useful way to think about programs.
This complexity doesn't really disappear when you leave OO languages, actually. The way most complex OCaml programs are structured, with modules grouping one main type and the functions working on it, is in a lot of ways inspired by OO.
> grouping of values and functions
Encapsulation.
Which I think is misunderstood a lot, both by practitioners and critics.
You're right, it is all about the problem domain. Unfortunately, there was a solid decade where that was not the typical advice, and OO was pushed (in industry and in education) as the last word in programming, suitable for all tasks. There's a generation out there who was taught programming as "instantiate a truck object that inherits from a car object" and another generation who was required to implement math using OOP principles instead of just doing math. Programming languages that did not have object models suddenly developed them, often incompatibly with the rest of the language. So, while I think that OO has its places, I understand why there's a lot of visceral response to it online.
From my pov, both inheritance and encapsulation aren't great if you have to maintain code and add new code.
Also, I dislike design patterns overuse, DDD done Uncle Bob style.
Also we can think of where OOP drives many teams to:
https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...
https://factoryfactoryfactory.net/
https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
> I can't imagine building something like a graphics framework without some subtyping.
While React technically uses some OOP, in practice it's a pretty non-OOP way to do UI. Same with e.g. ImGUI (C++) and Clay (C). I suppose in the React case there's still an OOP thing called the DOM underneath, but that's pretty abstracted.
In practice most of the useful parts of OOP can be done with a "bag/record of functions". (Though not all. OCaml has some interesting stuff wrt. the FP+OOP combo which hasn't been done elsewhere, but that may just be because it wasn't ultimately all that useful.)
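A sketch of what that looks like in Java terms (made-up names), where the "record of functions" is literally a record holding function values:

    import java.util.function.Function;

    // Behaviour is passed around as plain function values instead of
    // via an interface hierarchy.
    record Codec<T>(Function<T, String> encode, Function<String, T> decode) {}

    class RecordOfFunctionsDemo {
        public static void main(String[] args) {
            Codec<Integer> intCodec = new Codec<>(String::valueOf, Integer::parseInt);
            String wire = intCodec.encode().apply(42);
            System.out.println(intCodec.decode().apply(wire));  // 42
        }
    }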
React is a kind of strange dysfunctional OOP pretending not to be, to appeal to people like those on this thread ;)
Function calls have state, in React. Think about that for a second! It totally breaks the most basic parts of programming theory taught on day one of any coding class. The resulting concepts map pretty closely:
• React function -> instantiate or access a previously instantiated object.
• useState -> define an object field
• Code inside the function: constructor logic
• Return value: effectively a getResult() style method
The difference is that the underlying stateful objects, implemented in OOP using inheritance (check out the Blink code), are covered up with the vdom diffing. It's a very complicated and indirect way to do a bunch of method calls on stateful objects.
The React model doesn't work for a lot of things. I just Googled [react editor component] and the first hit is https://primereact.org/editor/ which appears to be an ultra-thin wrapper around a library called Quill. Quill isn't a React component, it's a completely conventional OOP library. That's because modelling a rich text editor as a React component would be weird and awkward. The data structures used for the model aren't ideal for direct modification or exposure. You really need the encapsulation provided by objects with properties and methods.
React is most likely not what the author had in mind by a graphics framework. The browser implementation of the DOM or a desktop widget system is much more likely the idea.
While using an OOP language.
Yeah, I agree with you, and actually like OOP where it's appropriate.
Unfortunately there were so many bad examples from the old Java "every thing needs a dozen factories and thousands of interfaces" days that most people haven't seen the cases where it works well.
If everyone does it wrong, then that alone means it itself is wrong.
Everyone? Really, that's your take? Most code out there is OOP and I find it hard to believe that everything is wrong.
Most food out there is McDonalds.
Indeed. "The purpose of a system is what it does."
I find inheritance works best when you model things that don't exist in reality, but only as software concepts, for example, an AbstractList, Buffer or GUI component.
Really like this concept!
That's why we use dependency injection now~~!
I've always wanted my editor's go-to functionality to take me to an abstract class instead of the place where the actual logic resides. Good times.
Any modern IDE will let you immediately bring up the subclasses with a single hotkey. If you have an abstract class with only a single subclass and that's not because new code is going to be added soon then yes, it's a bad design decision. Fortunately, also easy to fix with good IDEs.
In my last project every class had a corresponding abstract class, and then we used DI to use the real class. Good to be rid of it.
It's all about the problem domain imo. I can't imagine building something like a graphics framework without some subtyping.
The keyword being "some".
Yes, there are those who can use OOP responsibly, but in my (fortunately short) experience with Enterprise Java, they are outnumbered by the cargo-cult dogma of architecture astronauts who advocate a "more is better" approach to abstraction and design patterns. That's how you end up with things like AbstractSingletonProxyFactoryBean.