For me, it's the fact that the mess of DAOs and Factories that constituted "enterprise" Java in the 00s was a special kind of hellscape that was actively encouraged by the design of the language.

Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.

It was terrible and taught me to avoid applying for jobs that used Java.

I like OOP and often use it. But mostly just as an encapsulation of functionality, and I never use interfaces or the like.

As someone coding since 1986 it is always kind of interesting how Java gets the hate for something that it never started, and was already common in the industry even before Oak became an idea.

To the point that there are people that will assert the GoF book, published before Java was invented, actually contains Java in it.

People did it, sometimes, when they needed it.

It was so rare that the GoF thought they needed to write a book to teach people how to use those patterns when they eventually ran into them.

But after the book was published, those patterns became "advanced programming that is worth testing for in job interviews", and people started to code for their CVs. The same happened briefly with refactoring, and for much longer with unit tests and the other XP activities (like TDD).

At the same time, Java's popularity was exploding in enterprise software.

It was, but still the book did not magically change from Smalltalk and C++ into Java.

It is probably because Java took this design philosophy (or should I say dogma) to heart: its very syntax and structure encourage you to write code like that. One example: it does not have proper modules. Modules, the one thing most people can agree is a good thing, enabling modularity, literally. Another one: you cannot simply have a function in a module. Shit needs to be inside classes or mixed up with other unrelated concepts. Java the language encourages this kind of madness.

They are called packages. There is nothing about modules as a programming concept that requires the existence of functions as standalone entities.

Again, Smalltalk did it first, and is actually one of the two languages used in the famous GoF book to implement all the OOP patterns people complain about, the other being C++.

> There is nothing about modules as a programming concept that requires the existence of functions as standalone entities.

I didn't claim it does. To make the point though: bare functions are a much simpler and cleaner building block than classes. Classes by their nature put state and behavior in one place. If one doesn't need that, then a class is actually not the right concept to go for (assuming one has the choice, which one doesn't in Java). A few constants and a bunch of functions would be a simpler and fully sufficient concept in that case. And how does one group those? Well, a module.

In Java you are basically forced to make unnecessary classes that only have static functions as members to achieve a similar simplicity, but then you still get that ugly class thing thrown in unnecessarily.
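For example, the kind of thing one ends up writing (a minimal sketch with made-up names, just to illustrate the "class as a namespace for static functions" pattern):

    // A class that exists only to hold functions; it is never instantiated.
    public final class StringUtils {
        private StringUtils() {} // no instances, the class is just a namespace

        public static boolean isBlank(String s) {
            return s == null || s.trim().isEmpty();
        }

        public static String capitalize(String s) {
            if (isBlank(s)) return s;
            return Character.toUpperCase(s.charAt(0)) + s.substring(1);
        }
    }

In a language with proper modules, those functions (and any constants) could just live in a module by themselves.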

In a few other languages things may be built on something other than functions. Like words in Forth or something. But even those can be interpreted as functions with a few implicit arguments. And you can just write them down. No need to put them into some class or some blabliblub concept.

From a type systems theory point of view, a class is an extensible module that can be used as a variable.

As mentioned in another reply, Java did not invent this, it was building upon Smalltalk and SELF, with a little bit of Objective-C on the side, and C++ like syntax.

Try to create a single function in Smalltalk, or SELF.

http://stephane.ducasse.free.fr/FreeBooks.html

https://www.strongtalk.org/

https://selflanguage.org/

It is also no accident that when Java came onto the scene, some big Smalltalk names like IBM migrated their Smalltalk tooling to Java practically from one day to the next, and to this day Eclipse still has the same object browser as any Smalltalk environment.

Smalltalk,

https://www.researchgate.net/figure/The-Smalltalk-browser-sh...

In which you will find a certain similarity to NeXTSTEP's navigation tools, and eventually the OS X Finder.

The code browser in Eclipse

https://i.sstatic.net/4OFEM.png

By the way, in OOP languages like Python, even functions are objects,

    Python 3.14.0 (tags/v3.14.0:ebf955d, Oct  7 2025, 10:15:03) [MSC v.1944 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> def sum(x, y): return x + y
    ...
    >>> sum
    <function sum at 0x0000017A9778D4E0>
    >>> dir(sum)
    ['__annotate__', '__annotations__', '__builtins__', '__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__get__', '__getattribute__', '__getstate__', '__globals__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__kwdefaults__', '__le__', '__lt__', '__module__', '__name__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__type_params__']
    >>> type(sum)
    <class 'function'>
    >>> sum.__name__
    'sum'
    >>> sum.__class__
    <class 'function'>

> The code browser in Eclipse

> https://i.sstatic.net/4OFEM.png

«

Error 1011 Ray ID: 9973d6cc1badc66a • 2025-10-31 14:28:28 UTC

Access denied

What happened?

The owner of this website (i.sstatic.net) does not allow hotlinking to that resource (/4OFEM.png).

»

In Java, it has to be a class in a package. Packages are sane enough. That isn't the point.

That is the point: packages are the Java programming language's feature for the CS concept of modules.

https://en.wikipedia.org/wiki/Modular_programming

> Languages that formally support the module concept include Ada, ALGOL, BlitzMax, C++, C#, Clojure, COBOL, Common Lisp, D, Dart, eC, Erlang, Elixir, Elm, F, F#, Fortran, Go, Haskell, IBM/360 Assembler, IBM System/38 and AS/400 Control Language (CL), IBM RPG, Java, Julia, MATLAB, ML, Modula, Modula-2, Modula-3, Morpho, NEWP, Oberon, Oberon-2, Objective-C, OCaml, several Pascal derivatives (Component Pascal, Object Pascal, Turbo Pascal, UCSD Pascal), Perl, PHP, PL/I, PureBasic, Python, R, Ruby,[4] Rust, JavaScript,[5] Visual Basic (.NET) and WebDNA.

If the whole complaint is that you cannot have a bare bones function outside of a class, Java is not alone.

Predating Java, some of them by decades: Smalltalk, StrongTalk, SELF, Eiffel, Sather, BETA.

And naturally lets not forget C#, that came after Java.

Thankfully those days are not with us any more. Java has moved on quite considerably in the last few years.

I think people are still too ready to use massive, hulking frameworks for every little thing, of course, but the worst of the 'enterprise' stuff seems to have been banished.

I hope you are right. I really do. But I have a hunch that if I accepted any Java job, I would simply have coworkers who are still stuck in the "enterprise" Java ideology, and whose word has more weight than the word of a newcomer. That's one of the fears that stops me from seriously considering Java shops: fear of unreasonable coworkers, and of being forced to deliver shitty work that meets their idea of how the code should be written in the most enterprise way they can come up with.

Always makes me think of that AbstractProxyFactorySomething or similar that I saw in Keycloak, for when you want to implement your own password quality criteria. When you step back a bit and think about what you actually want to have, you realize that all you want is a function that takes a string as input and gives a boolean as output, depending on whether the password is strong enough, or fulfills all criteria. Maybe you want to output a list of unmet criteria, if you want to make it complex. But no, it's AbstractProxyFactorySomething.

I don't understand these complaints.

Here is a tiny interface that will do what you need:

    @FunctionalInterface
    public interface IPasswordChecker
    {
        boolean isValid(String password);
    }
Now you can trivially declare a lambda that implements the interface.

Example:

    final IPasswordChecker passwordChecker = (String password) -> password.length() >= 16;

I'm personally rather fond of Java, but even this (or the shorter `Predicate`) still can't compete with the straightforward simplicity of a type along the lines of `string -> bool`.
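For comparison, a minimal sketch of the same thing with the built-in `java.util.function.Predicate` (class and rule are made up for illustration):

    import java.util.function.Predicate;

    class PredicateDemo {
        public static void main(String[] args) {
            // The "password checker" is just a predicate over strings.
            Predicate<String> passwordChecker = password -> password.length() >= 16;
            System.out.println(passwordChecker.test("correct horse battery staple")); // true
        }
    }

It works, but it is still an interface with a single method standing in for a plain `string -> bool` type.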

[deleted]

    > that was actively encouraged by the design of the language.
Java hasn't changed that much since the "hellscape" 00s. Is it better now? Or what is specific to the language that encourages "the mess of DAOs and Factories"? You can make all of those same mistakes in Python, C#, or C++. I have used Java for about 15 years now and I have never written any of that junky enterprise crap with a million layers of OO.

    > I never use interfaces or the like.
This is the first time I have heard any disdain towards interfaces. What is there not to like?

It insists upon itself. That’s really the problem with Java’s design philosophy from that era; it encourages ceremony. Even if you don’t write the full-on "Enterprise™" soup of DAOs, Factories, and ServiceLocators, the language’s type system and conventions gently nudge you toward abstraction layers you don’t actually need.

Interfaces for everything, abstract classes “just in case,” dependency injection frameworks that exist mainly to manage all the interfaces. Java (and often Enterprise C#) is all scaffolding built to appease the compiler and the ideology of “extensibility” before there’s any actual complexity to extend.

You can write clean, functional, concise Java today, especially with records, pattern matching, and lambdas, but the culture around the language was forged in a time when verbosity was king.
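For what it's worth, something like this would have been unthinkable in 2005-era Java (a rough sketch using Java 21 features; the types are invented for illustration):

    // Records, sealed interfaces, and pattern matching in switch.
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double w, double h) implements Shape {}

    class Area {
        static double of(Shape s) {
            return switch (s) {  // exhaustive over the sealed hierarchy, no default needed
                case Circle c -> Math.PI * c.radius() * c.radius();
                case Rect r -> r.w() * r.h();
            };
        }
    }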

I think the best description of this kind of "obfuscation" that especially afflicted Java still is Steve Yegge's "Kingdom of Nouns" rant:

https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...

It's very useful in C++, funnily enough. This is because I can have a non-templated interface base class, then a templated impl class.

Then my templated impl header can be very heavy without killing my build times since only the interface base class is #included.

Not sure if this is as common in Java.

Using more complex architecture (which requires more human time to understand) to merely make build time shorter is a ridiculous choice.

For a large project this could save hours of developer time.

C++ is a hell of a language.

just buy your devs faster computers to compile on

You can't buy your way out of this, because C++ builds are only parallelizable across multiple translation units[1] (i.e. separate .cpp files). Unless you're willing to build a better single-core CPU, there's not much you can do.

The challenge with modern C++ projects is that every individual TU can take forever to build because it involves parsing massive header files. Oftentimes you can make this faster with "unity builds" that combine multiple C++ files into a single TU since the individual .cpp file's build time is negligible compared to your chonky headers.

The reason the header files are so massive is because using a templated entity (function or class) requires seeing the ENTIRE DEFINITION at the point of use, because otherwise the compiler doesn't know if the substitution will be successful. You can't forward declare a templated entity like you would with normal code.[2]

If you want to avoid including these definitions, you create an abstract interface and inherit from that in your templated implementations, then pass the abstract interface around.

[1] or linking with mold

[2] There used to be a feature that allowed forward declaring templated entities called "export". A single compiler tried to implement it and it was such a failure it was removed from the language. https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14...

Java uses type erasure, which is very cheap at compile time, but you cannot do things like

   t = new T(); // T is a template parameter class
C++ uses reified generics, which are heavy on compile time but allow the above.
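To make the erasure side concrete: since `T` is gone at runtime, the usual Java workaround is to pass a `Class<T>` handle explicitly (a hedged sketch, names invented):

    // "new T()" cannot compile, so reflection through Class<T> stands in for it.
    class Factory<T> {
        private final Class<T> type;

        Factory(Class<T> type) { this.type = type; }

        T create() throws ReflectiveOperationException {
            // T t = new T();  // does not compile: the runtime has no idea what T is
            return type.getDeclaredConstructor().newInstance();
        }
    }

    // usage: new Factory<>(StringBuilder.class).create()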

That's why they're called generic parameters, not template parameters; the code is generic over all possible parameters, not templated for every possible parameter.

    > C++ uses reified generics
    I was a C++ programmer for many years, but I never heard this claim. I asked Google AI and it disagrees.

    > does c++ have reified generics?

    > C++ templates do not provide reified generics in the same sense as languages like C# or Java (to a limited extent). Reified generics mean that the type information of generic parameters is available and accessible at runtime.

Interesting, I'd never picked up on this pedantic subtlety. I too thought "reified" meant what you could do at the call site, not what you could do at runtime. Was my understanding wrong, or is Gemini hallucinating?

In any event, you have to use weird (I think “unsafe”) reflection tricks to get the type info back at runtime in Java. To the point where it makes you think it’s not supported by the language design but rather a clever accident that someone figured out how to abuse.
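The best-known of those tricks is the "super type token" (the idea behind e.g. Gson's `TypeToken`): the generic type of a class's superclass is kept in the class file, so an anonymous subclass can fish it back out. A hedged sketch:

    import java.lang.reflect.ParameterizedType;
    import java.lang.reflect.Type;
    import java.util.List;

    abstract class TypeToken<T> {
        Type captured() {
            // The anonymous subclass's generic superclass, e.g. TypeToken<List<String>>,
            // still carries its type argument at runtime.
            return ((ParameterizedType) getClass().getGenericSuperclass())
                    .getActualTypeArguments()[0];
        }
    }

    class Demo {
        public static void main(String[] args) {
            // prints something like "java.util.List<java.lang.String>"
            System.out.println(new TypeToken<List<String>>() {}.captured());
        }
    }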

Funny, given that interfaces are the good part (especially compared to inheritance).

I didn't mean to imply that interfaces are bad or useless. Just that I don't use them. Probably because I write most of my stuff in Python these days.

"But but... I can swap out my entire my persistance layer since it's all just an interface!"

Has anyone ever actually done this?

I have used something similar with effects in Haskell to mock "the real world" for running tests.

But if it was as convoluted to use as it is in Java, I wouldn't. And also, it's not enterprise CRUD. Enterprise CRUD resists complex architectures like nothing else.

> Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.

Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among its purposes, it allows code to be reused and even updated without having to recompile the project. That's pretty useful.

Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.

Perhaps you haven't had the opportunity to experience the advantages of using these techniques, or weren't mindful of when you benefited from them. We tend to remember the bad parts and assume the good parts are a given. But personal tastes don't refute the value and usefulness of features you never learned to appreciate.
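The usual illustration of the testability argument is something like this (a minimal, hypothetical sketch, not tied to any framework):

    // Production code depends only on the interface...
    interface Clock {
        long now();
    }

    final class SystemClock implements Clock {
        public long now() { return System.currentTimeMillis(); }
    }

    // ...so a test can hand in a deterministic fake instead.
    final class FixedClock implements Clock {
        public long now() { return 1_000L; }
    }

    final class SessionChecker {
        private final Clock clock;
        SessionChecker(Clock clock) { this.clock = clock; }
        boolean isExpired(long expiresAtMillis) { return clock.now() > expiresAtMillis; }
    }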

> > Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.

> Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among its purposes, it allows code to be reused and even updated without having to recompile the project. That's pretty useful.

> Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.

I don't think GP was saying that dynamically loaded objects are not needed, or that interfaces are not needed.

I read it more as "Dynamically loaded interfaces that can be swapped out are not needed".

> Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.

The share of all software that actually benefits from this is extremely small. Most web-style software with stateless request/response is better architected for containers and rolling deployments. Most businesses are also completely fine with a few minutes of downtime here and there. For runtime-replacement to be valuable, you need both statefulness and high SLA (99.999+%) requirements.

To be fair, there is indeed a subset of software that is both stateful and with high SLA requirements, where these techniques are useful, so it's good to know about them for those rare cases. There is some pretty compelling software underneath those Java EE servers for the few use-cases that really need them.

But those use-cases are rare.

>You cannot have quality software without these basic testing techniques

Of course you can, wtf?

Mocks are often the reason for tests being green and the app not working :)

> Of course you can, wtf?

Explain then what is your alternative to unit and integration tests.

> Mocks are often the reason for tests being green and the app not working :)

I don't think that's a valid assumption. Tests just verify the system under test, and test doubles are there only to provide inputs in a way that isolates your system under test. If your tests either leave out invariants that are behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created tests, not in the concept of a mock.

A workman and his tools.

I personally believe mocks are a bad practice which is caused by bad architecture. When different components are intertwined, and you cannot test them in isolation, the good solution is refactoring the code; the bad is using mocks.

For an example of such intertwined architecture, see Mutter, the window manager of GNOME Shell (a program that manages windows on the Linux desktop). The code that handles key presses (accessibility features, shortcuts) needs objects like MetaDisplay or MetaSeat and cannot be tested in isolation; you figuratively need half of Wayland for it to work.

Good tests use the black-box principle; i.e. they only use public APIs and do not rely on knowledge of the inner workings of a component. When the component changes, the tests do not break. Tests with mocks rely on knowing how the component works and which functions it calls; such tests become brittle, break often, and require a lot of effort to update when the code changes.

Avoid mocks as much as you can.

It's not necessary to have mocks for unit tests. They can be a useful tool, but they aren't required.

I am fine with having fake implementations and so forth, but the whole "when function X is called with Y arguments, return Z" thing is bad. It leads to very tight coupling of the test code with the implementation, and often means the tests are only testing against the engineer's understanding of what's happening - which is the same thing they coded against in the first place. I've seen GP's example of tests being green but the code not working correctly a number of times because of that.

Most compilers do not use "unit" tests per se. Much more common are integration tests targeted at a particular lowering phase or optimization pass.

This is pretty important since "unit tests" would be far too constraining for reasonable modifications to the compiler, e.g. adding a new pass could change the actual output code without modifying the semantics.

LLVM has "unit" tests

I mean they run a single pass with some small LLVM IR input and check if the output IR is fine.

>Explain then what is your alternative to unit and integration tests.

Tests against real components instead of mocks.

>If your tests either leave out invariants that are behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created tests, not in the concept of a mock.

Nowadays external components can be very complex systems, e.g. DBs, messaging queues, third-party APIs, and so on.

A lot of things can go wrong that you aren't even aware of, which makes it hard to get the mocks right.

Examples? fuckin emojis.

With a mocked in-memory database they work fine, but they fail on the real DB due to encoding settings.

I'm not a fan of extensive mocking but you're conflating two rather different test cases. A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration. You should of course have tests towards the database too, but then you'd mock out parts of the application instead and not the database itself.

>A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration.

Wdym?

You're testing, e.g., a simple CRUD operation, like creating an HN thread.

With the mocked DB it passes, with the real DB it fails due to an encoding issue.

The result is that tests are green, but app does not work.

We're talking about OO Java. You bring up shared libraries, list a bunch of things not unique to Java nor OO, then claim `etc.` benefits.

You really haven't argued anything, so ending on a "you must be personally blind jab" just looks dumb.

It's because the concepts are the same, but people get enraged by the words. What Java calls a factory would be a "plugin loader" in C++. It's the same concept. And most big C++ codebases end up inventing something similar. Windows heavily uses COM which is full of interfaces and factories, but it isn't anything to do with Java.

Java I think gets attacked this way because a lot of developers, especially in the early 2000s, were entering the industry only familiar with scripting languages they'd used for personal hobby projects, and then Java was the first time they encountered languages and projects that involved hundreds of developers. Scripting codebases didn't define interfaces or types for anything even though that limits your project scalability, unit testing was often kinda just missing or very superficial, and there was an ambient assumption that all dependencies are open source and last forever whilst the apps themselves are throwaway.

The Java ecosystem quickly evolved into the enterprise server space and came to make very different assumptions, like:

• Projects last a long time, may churn through thousands of developers over their lifetimes and are used in big mission critical use cases.

• Therefore it's better to impose some rules up front and benefit from the discipline later.

• Dependencies are rare things that create supplier risks, you purchase them at least some of the time, they exist in a competitive market, and they can be transient, e.g. your MQ vendor may go under or be outcompeted by a better one. In turn that means standardized interfaces are useful.

So the Java community focused on standardizing interfaces to big chunky dependencies like relational databases, message queuing engines, app servers and ORMs, whereas the scripting language communities just said YOLO and anyway why would you ever want more than MySQL?

Very different sets of assumptions lead to different styles of coding. And yes it means Java can seem more abstract. You don't send queries to a PostgreSQL or MySQL object, you send it to an abstract Connection which represents standardized functionality, then if you want to use DB specific features you can unwrap it to a vendor specific interface. It makes things easier to port.
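That abstract Connection is plain JDBC; roughly (a sketch, the connection URL and query are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    class JdbcDemo {
        public static void main(String[] args) throws SQLException {
            // Code is written against the standardized java.sql interfaces...
            try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/demo");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
            // ...and only unwraps to a vendor-specific interface when it has to, e.g.
            // conn.unwrap(org.postgresql.PGConnection.class)
        }
    }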