> Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.
Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among their purposes, they allow code to be reused and even updated without having to recompile the project. That's pretty useful.
Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.
Perhaps you haven't had the opportunity to experience the advantages of using these techniques, or weren't mindful of when you benefited from them. We tend to remember the bad parts and assume the good parts are a given. But personal tastes don't refute the value and usefulness of features you never learned to appreciate.
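To make the testability point concrete, here's a minimal sketch (hypothetical names throughout): the caller depends on an interface, so the real implementation and a test double can be swapped without touching the calling code.

```java
import java.util.ArrayList;
import java.util.List;

interface MessageSender {                       // the seam callers depend on
    void send(String recipient, String body);
}

class SmtpSender implements MessageSender {     // real implementation (stubbed here)
    public void send(String recipient, String body) {
        // would open an SMTP connection in production
    }
}

class RecordingSender implements MessageSender { // test double
    final List<String> sent = new ArrayList<>();
    public void send(String recipient, String body) {
        sent.add(recipient + ": " + body);
    }
}

class Notifier {                                // code under test, unaware of the swap
    private final MessageSender sender;
    Notifier(MessageSender sender) { this.sender = sender; }
    void greet(String user) { sender.send(user, "hello"); }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        RecordingSender recorder = new RecordingSender();
        new Notifier(recorder).greet("alice");  // same code path as production
        System.out.println(recorder.sent);      // [alice: hello]
    }
}
```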
> > Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.
> Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among their purposes, they allow code to be reused and even updated without having to recompile the project. That's pretty useful.
> Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.
I don't think GP was saying that dynamically loaded objects are not needed, or that interfaces are not needed.
I read it more as "Dynamically loaded interfaces that can be swapped out are not needed".
> Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.
The share of all software that actually benefits from this is extremely small. Most web-style software with stateless request/response is better architected around containers and rolling deployments. Most businesses are also completely fine with a few minutes of downtime here and there. For runtime replacement to be valuable, you need both statefulness and high SLA (99.999%+) requirements.
To be fair, there is indeed a subset of software that is both stateful and with high SLA requirements, where these techniques are useful, so it's good to know about them for those rare cases. There is some pretty compelling software underneath those Java EE servers for the few use-cases that really need them.
But those use-cases are rare.
>You cannot have quality software without these basic testing techniques
Of course you can, wtf?
Mocks are often the reason tests are green while the app doesn't work :)
> Of course you can, wtf?
Then explain what your alternative to unit and integration tests is.
> Mocks are often the reason tests are green while the app doesn't work :)
I don't think that's a valid assumption. Tests just verify the system under test, and test doubles are there only to provide inputs in a way that isolates your system under test. If your tests either leave out invariants that are behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created tests, not in the concept of a mock.
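A minimal sketch of that idea (names are made up): the test double only supplies canned inputs, and the assertions target the system under test, not the double.

```java
interface PriceSource {                     // dependency to be doubled
    int priceCents(String sku);
}

class CheckoutTotal {                       // system under test
    private final PriceSource prices;
    CheckoutTotal(PriceSource prices) { this.prices = prices; }
    int totalCents(String... skus) {
        int total = 0;
        for (String sku : skus) total += prices.priceCents(sku);
        return total >= 10_000 ? total * 90 / 100 : total; // 10% off over $100
    }
}

public class DoubleDemo {
    public static void main(String[] args) {
        PriceSource fixed = sku -> 6_000;   // double: canned input, no behaviour claims
        int total = new CheckoutTotal(fixed).totalCents("a", "b");
        System.out.println(total);          // 12000 with the discount: 10800
    }
}
```

The invariant being checked (the discount rule) lives in `CheckoutTotal`; the double just makes the input deterministic.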
A workman and his tools.
I personally believe mocks are a bad practice caused by bad architecture. When different components are intertwined and you cannot test them in isolation, the good solution is refactoring the code; the bad one is using mocks.
For an example of such an intertwined architecture, see Mutter, the window manager of GNOME Shell (a program that manages windows on the Linux desktop). The code that handles key presses (accessibility features, shortcuts) needs objects like MetaDisplay or MetaSeat and cannot be tested in isolation; you figuratively need half of Wayland for it to work.
Good tests use the black-box principle, i.e. they only use public APIs and do not rely on knowledge of the inner workings of a component. When the component changes, the tests do not break. Tests with mocks rely on knowing how the component works and which functions it calls; such tests become brittle, break often, and require a lot of effort to update when the code changes.
Avoid mocks as much as you can.
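A sketch of the black-box idea (toy component, made-up names): the test drives only the public API and checks observable behaviour, so internal refactors don't break it.

```java
import java.util.HashMap;
import java.util.Map;

class BoundedCache {                       // hypothetical component under test
    private final Map<String, String> map = new HashMap<>();
    private final int capacity;
    BoundedCache(int capacity) { this.capacity = capacity; }
    public void put(String k, String v) {
        // crude eviction policy; an internal detail tests should not depend on
        if (map.size() >= capacity && !map.containsKey(k)) map.clear();
        map.put(k, v);
    }
    public String get(String k) { return map.get(k); }
}

public class BlackBoxDemo {
    public static void main(String[] args) {
        BoundedCache cache = new BoundedCache(2);
        cache.put("a", "1");
        cache.put("b", "2");
        // Black-box assertion: only what callers can observe.
        System.out.println(cache.get("a")); // 1
        // A mock-based test might instead verify that some internal
        // evict() was called N times, and break on every refactor.
    }
}
```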
It's not necessary to have mocks for unit tests. They can be a useful tool, but they aren't required.
I am fine with having fake implementations and so forth, but the whole "when function X is called with Y arguments, return Z" thing is bad. It leads to very tight coupling of the test code with the implementation, and often means the tests are only testing against the engineer's understanding of what's happening - which is the same thing they coded against in the first place. I've seen GP's example of tests being green but the code not working correctly a number of times because of that.
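For contrast, here's what a fake implementation looks like (hypothetical names): it has real behaviour, so the test stays coupled to what the code does rather than to which calls it makes; a record/replay stub ("when findName(\"42\") then return \"alice\"") would instead encode the implementation's call sequence into the test.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

interface UserRepository {
    void save(String id, String name);
    Optional<String> findName(String id);
}

class InMemoryUserRepository implements UserRepository { // fake, not a mock
    private final Map<String, String> users = new HashMap<>();
    public void save(String id, String name) { users.put(id, name); }
    public Optional<String> findName(String id) {
        return Optional.ofNullable(users.get(id));
    }
}

class Renamer {                                          // code under test
    private final UserRepository repo;
    Renamer(UserRepository repo) { this.repo = repo; }
    void rename(String id, String newName) {
        repo.findName(id).ifPresent(old -> repo.save(id, newName));
    }
}

public class FakeDemo {
    public static void main(String[] args) {
        UserRepository repo = new InMemoryUserRepository();
        repo.save("42", "alice");
        new Renamer(repo).rename("42", "bob");
        System.out.println(repo.findName("42").get());   // bob
    }
}
```

If `Renamer` later switches to a different call pattern, this test keeps passing as long as the observable result is the same.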
Most compilers do not use "unit" tests per se. Much more common are integration tests targeted at a particular lowering phase or optimization pass.
This is pretty important since "unit tests" would be far too constraining for reasonable modifications to the compiler, e.g. adding a new pass could change the actual output code without modifying the semantics.
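A toy illustration of that point (not LLVM; a constant-folding "pass" over a tiny expression tree): the test checks preserved semantics, i.e. that both trees evaluate to the same value, rather than the exact output shape, so adding another pass wouldn't break it.

```java
interface Expr { int eval(); }
record Lit(int v) implements Expr { public int eval() { return v; } }
record Add(Expr l, Expr r) implements Expr {
    public int eval() { return l.eval() + r.eval(); }
}

public class PassDemo {
    // The pass: fold Add(Lit, Lit) into a single Lit, bottom-up.
    static Expr fold(Expr e) {
        if (e instanceof Add a) {
            Expr l = fold(a.l()), r = fold(a.r());
            if (l instanceof Lit x && r instanceof Lit y) return new Lit(x.v() + y.v());
            return new Add(l, r);
        }
        return e;
    }

    public static void main(String[] args) {
        Expr before = new Add(new Lit(2), new Add(new Lit(3), new Lit(4)));
        Expr after = fold(before);
        // Semantic check: same value before and after the pass.
        System.out.println(before.eval() + " == " + after.eval()); // 9 == 9
    }
}
```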
LLVM has "unit" tests.
I mean they run a single pass on some small LLVM IR input and check if the output IR is fine.
>Explain then what is your alternative to unit and integration tests.
Tests against real components instead of mocks.
>If your tests either leave out invariants that are behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created tests, not in the concept of a mock.
Nowadays external components can be very complex systems, e.g. DBs, message queues, third-party APIs, and so on.
A lot of things can go wrong that you aren't even aware of, so you can't get the mocks right.
Examples? Fuckin' emojis.
With a mocked in-memory database they work fine, but fail on the real DB due to encoding settings.
I'm not a fan of extensive mocking but you're conflating two rather different test cases. A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration. You should of course have tests towards the database too, but then you'd mock out parts of the application instead and not the database itself.
>A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration.
Wdym?
You're testing, e.g., a simple CRUD operation, say creating an HN thread.
With the mocked DB it passes; with the real DB it fails due to an encoding issue.
The result is that tests are green, but the app does not work.
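A sketch of that failure mode (the store here is made up, but the constraint is real: MySQL's legacy `utf8` charset stores at most 3 UTF-8 bytes per character, so 4-byte code points like most emoji are rejected unless you use `utf8mb4`):

```java
import java.util.HashMap;
import java.util.Map;

public class EncodingDemo {
    static final Map<String, String> fakeDb = new HashMap<>();

    static void saveToFake(String id, String title) {
        fakeDb.put(id, title);                 // the in-memory fake accepts anything
    }

    // Mimics legacy-utf8 validation: reject code points above U+FFFF,
    // which need 4 bytes in UTF-8.
    static void saveToRealDb(String id, String title) {
        title.codePoints().forEach(cp -> {
            if (cp > 0xFFFF)
                throw new IllegalArgumentException(
                    "Incorrect string value for column 'title'");
        });
    }

    public static void main(String[] args) {
        String title = "great thread \uD83D\uDE03";  // U+1F603, a 4-byte emoji
        saveToFake("1", title);                      // test is green
        try {
            saveToRealDb("1", title);                // production blows up
        } catch (IllegalArgumentException e) {
            System.out.println("real DB rejected: " + e.getMessage());
        }
    }
}
```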
We're talking about OO Java. You bring up shared libraries, list a bunch of things unique to neither Java nor OO, then claim `etc.` benefits.
You really haven't argued anything, so ending on a "you must be personally blind" jab just looks dumb.
It's because the concepts are the same, but people get enraged by the words. What Java calls a factory would be a "plugin loader" in C++. It's the same concept, and most big C++ codebases end up inventing something similar. Windows heavily uses COM, which is full of interfaces and factories, but has nothing to do with Java.
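Stripped of the naming wars, the shared concept is just a registry mapping a key to "how to construct an implementation". A minimal sketch (hypothetical names; a real factory might populate the registry from config or the classpath):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

interface Codec { String name(); }

class JsonCodec implements Codec { public String name() { return "json"; } }
class XmlCodec implements Codec  { public String name() { return "xml"; } }

class CodecFactory {
    private static final Map<String, Supplier<Codec>> registry = new HashMap<>();
    static {
        registry.put("json", JsonCodec::new);  // could come from config/classpath
        registry.put("xml", XmlCodec::new);
    }
    static Codec create(String key) {
        Supplier<Codec> s = registry.get(key);
        if (s == null) throw new IllegalArgumentException("no codec: " + key);
        return s.get();                        // caller never names a concrete class
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Codec c = CodecFactory.create("json");
        System.out.println(c.name());          // json
    }
}
```

Call it a factory or a plugin loader; the structure is identical.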
Java, I think, gets attacked this way because a lot of developers, especially in the early 2000s, entered the industry familiar only with scripting languages they'd used for personal hobby projects, and Java was the first time they encountered languages and projects involving hundreds of developers. Scripting codebases didn't define interfaces or types for anything, even though that limits a project's scalability; unit testing was often just missing or very superficial; and there was an ambient assumption that all dependencies are open source and last forever, while the apps themselves are throwaway.
The Java ecosystem quickly evolved into the enterprise server space and came to make very different assumptions, like:
• Projects last a long time, may churn through thousands of developers over their lifetimes and are used in big mission critical use cases.
• Therefore it's better to impose some rules up front and benefit from the discipline later.
• Dependencies are rare things that create supplier risks, you purchase them at least some of the time, they exist in a competitive market, and they can be transient, e.g. your MQ vendor may go under or be outcompeted by a better one. In turn that means standardized interfaces are useful.
So the Java community focused on standardizing interfaces to big chunky dependencies like relational databases, message queuing engines, app servers and ORMs, whereas the scripting language communities just said YOLO and anyway why would you ever want more than MySQL?
Very different sets of assumptions lead to different styles of coding. And yes, it means Java can seem more abstract. You don't send queries to a PostgreSQL or MySQL object; you send them to an abstract Connection that represents standardized functionality, and if you want to use DB-specific features you can unwrap it to a vendor-specific interface. It makes things easier to port.
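A toy mirror of that pattern (real JDBC uses `java.sql.Connection.unwrap` / `isWrapperFor`; the names below are made up): callers stay on the standard interface and unwrap only at the points where they genuinely need a vendor feature.

```java
interface DbConnection {                       // the standardized interface
    String query(String sql);
    <T> T unwrap(Class<T> iface);
}

interface PgSpecific {                         // hypothetical vendor extension
    String copyBulk(String table);
}

class PgConnection implements DbConnection, PgSpecific {
    public String query(String sql) { return "pg:" + sql; }
    public String copyBulk(String table) { return "COPY " + table; }
    public <T> T unwrap(Class<T> iface) {
        if (iface.isInstance(this)) return iface.cast(this);
        throw new IllegalArgumentException("not a wrapper for " + iface);
    }
}

public class UnwrapDemo {
    public static void main(String[] args) {
        DbConnection conn = new PgConnection();        // obtained via standard API
        System.out.println(conn.query("select 1"));    // portable path
        PgSpecific pg = conn.unwrap(PgSpecific.class); // vendor path, opt-in
        System.out.println(pg.copyBulk("users"));      // COPY users
    }
}
```

Only the lines that call `unwrap` need changing when you switch vendors; everything else keeps talking to `DbConnection`.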