It's extraordinary to me that Minecraft is both the game that has the most robust mod community out there and that the modders were working from obfuscated, decompiled Java binaries. With elaborate tooling to deobfuscate and then reobfuscate using the same mangled names. For over a decade! What dedication.

More proof that you don't need the source code to modify software. Then again, Java has always been easy to decompile, and IMHO the biggest obstacle to understanding is the "object-oriented obfuscation" that's inherent in large codebases even when you have the original source.

First time I have heard of object-oriented obfuscation.

I get it, but in general I don't get the OO hate.

It's all about the problem domain imo. I can't imagine building something like a graphics framework without some subtyping.
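For what it's worth, the kind of subtyping a graphics framework leans on can be sketched in a few lines. The `Shape`/`Circle`/`Rect` names here are purely illustrative, not from any real framework:

```java
import java.util.List;

// Callers work against the Shape supertype without knowing the concrete type.
interface Shape {
    double area();
}

record Circle(double radius) implements Shape {
    public double area() { return Math.PI * radius * radius; }
}

record Rect(double w, double h) implements Shape {
    public double area() { return w * h; }
}

public class Shapes {
    // Works unchanged for any current or future Shape subtype.
    static double totalArea(List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }

    public static void main(String[] args) {
        System.out.println(totalArea(List.of(new Circle(1), new Rect(2, 3))));
    }
}
```

The point is that `totalArea` never needs a switch over concrete types; adding a new shape touches no existing code.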

Unfortunately, people often use crap examples for OO. The worst is probably employee, where employee and contractor are subtypes of worker, or some other chicanery like that.

Of course in the real world a person can be both employee and contractor at the same time, can flit between those roles and many others, can temporarily park a role (e.g. a sabbatical), and many other permutations, all while maintaining history and even allowing for corrections of said history.

It would be hard to find any domain less suited to OO than HR records. I think these terrible examples are a primary reason for some people believing that OO is useless or worse than useless.

For me, it's the fact that the mess of DAOs and Factories that constituted "enterprise" Java in the 00s was a special kind of hellscape that was actively encouraged by the design of the language.

Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.

It was terrible and taught me to avoid applying for jobs that used Java.

I like OOP and often use it. But mostly just as an encapsulation of functionality, and I never use interfaces or the like.

As someone who has been coding since 1986, it is always kind of interesting how Java gets the hate for something it never started, and which was already common in the industry even before Oak became an idea.

To the point that there are people that will assert the GoF book, published before Java was invented, actually contains Java in it.

People did it, sometimes, when they needed it.

It was so rare that the GoF thought they needed to write a book to teach people how to use those patterns when they eventually encountered them.

But after the book was published, those patterns became "advanced programming that is worth testing for in job interviews", and people started to code for their CVs. The same happened briefly with refactoring, and for much longer with unit tests and the other XP activities (like TDD).

At the same time, Java's popularity was exploding in enterprise software.

It was, but still, the book did not magically change from Smalltalk and C++ into Java.

It is probably because Java took this design philosophy (or should I say dogma) to heart; its very syntax and structure encourage writing code like that. One example: it does not have proper modules. Modules, the one thing most people can agree is a good thing, enabling modularity, literally. Another one: you cannot simply have a function in a module. Shit needs to be inside classes or mixed up with other unrelated concepts. Java the language encourages this kind of madness.

It is called packages. There is nothing in the concept of modules that requires functions to exist as standalone entities.

Again, Smalltalk did it first, and is actually one of the languages in the famous GoF book, used to create all the OOP patterns people complain about, the other being C++.

> There is nothing in the concept of modules that requires functions to exist as standalone entities.

I didn't claim it does. To make the point though: bare functions are a much simpler and cleaner building block than classes. Classes by their nature put state and behavior in one place. If one doesn't need that, then a class is not the right concept to reach for (assuming one has the choice, which one doesn't in Java). A few constants and a bunch of functions would be a simpler and fully sufficient concept in that case. And how does one group those? Well, a module.

In Java you are basically forced to create unnecessary classes that only have static functions as members to achieve a similar simplicity, and even then you still have that ugly class thing thrown in needlessly.
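This is what a "module of functions" typically ends up looking like in Java: a final class with a private constructor, used purely as a namespace for static methods. `TextUtils` is a made-up name for illustration:

```java
// A class that exists only for grouping: no state, no instances.
public final class TextUtils {
    private TextUtils() {} // prevent instantiation; the class is just a namespace

    // Plain function: lowercases and replaces non-alphanumeric runs with "-".
    public static String slug(String s) {
        return s.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }

    // Another plain function grouped in the same "module".
    public static boolean isBlank(String s) {
        return s == null || s.trim().isEmpty();
    }
}
```

Usage is `TextUtils.slug("Hello World")`, which is exactly a module-qualified function call, with the class keyword along for the ride.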

In a few other languages maybe things are based on different things than functions. Like words in Forth or something. But even they can be interpreted to be functions, with a few implicit arguments. And you can just write them down. No need to put them into some class or some blabliblub concept.

From a type-systems-theory point of view, a class is an extensible module that can be used as a variable.

As mentioned in another reply, Java did not invent this, it was building upon Smalltalk and SELF, with a little bit of Objective-C on the side, and C++ like syntax.

Try to create a single function in Smalltalk, or SELF.

http://stephane.ducasse.free.fr/FreeBooks.html

https://www.strongtalk.org/

https://selflanguage.org/

It is also no accident that when Java came onto the scene, some big Smalltalk names like IBM migrated their Smalltalk tooling to Java from one day to the next, and to this day Eclipse still has the same object browser as any Smalltalk environment.

Smalltalk,

https://www.researchgate.net/figure/The-Smalltalk-browser-sh...

In which you will find a certain similarity to the NeXTSTEP navigation tools, and eventually the OS X Finder.

The code browser in Eclipse

https://i.sstatic.net/4OFEM.png

By the way, in OOP languages like Python, even functions are objects:

    Python 3.14.0 (tags/v3.14.0:ebf955d, Oct  7 2025, 10:15:03) [MSC v.1944 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> def sum(x, y): return x + y
    ...
    >>> sum
    <function sum at 0x0000017A9778D4E0>
    >>> dir(sum)
    ['__annotate__', '__annotations__', '__builtins__', '__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__get__', '__getattribute__', '__getstate__', '__globals__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__kwdefaults__', '__le__', '__lt__', '__module__', '__name__', '__ne__', '__new__', '__qualname__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__type_params__']
    >>> type(sum)
    <class 'function'>
    >>> sum.__name__
    'sum'
    >>> sum.__class__
    <class 'function'>

> The code browser in Eclipse

> https://i.sstatic.net/4OFEM.png

«

Error 1011 Ray ID: 9973d6cc1badc66a • 2025-10-31 14:28:28 UTC

Access denied

What happened?

The owner of this website (i.sstatic.net) does not allow hotlinking to that resource (/4OFEM.png).

»

In Java, it has to be a class in a package. Packages are sane enough. That isn't the point.

That is the point: packages are the Java programming language's feature for the CS concept of modules.

https://en.wikipedia.org/wiki/Modular_programming

> Languages that formally support the module concept include Ada, ALGOL, BlitzMax, C++, C#, Clojure, COBOL, Common Lisp, D, Dart, eC, Erlang, Elixir, Elm, F, F#, Fortran, Go, Haskell, IBM/360 Assembler, IBM System/38 and AS/400 Control Language (CL), IBM RPG, Java, Julia, MATLAB, ML, Modula, Modula-2, Modula-3, Morpho, NEWP, Oberon, Oberon-2, Objective-C, OCaml, several Pascal derivatives (Component Pascal, Object Pascal, Turbo Pascal, UCSD Pascal), Perl, PHP, PL/I, PureBasic, Python, R, Ruby,[4] Rust, JavaScript,[5] Visual Basic (.NET) and WebDNA.

If the whole complaint is that you cannot have a bare bones function outside of a class, Java is not alone.

Predating Java by several decades, Smalltalk, StrongTalk, SELF, Eiffel, Sather, BETA.

And naturally let's not forget C#, which came after Java.

Thankfully those days are not with us any more. Java has moved on quite considerably in the last few years.

I think people are still too ready to use massive, hulking frameworks for every little thing, of course, but the worst of the 'enterprise' stuff seems to have been banished.

I hope you are right. I really do. But I have a hunch that if I accepted any Java job, I would simply have coworkers who are still stuck in the "enterprise" Java ideology, and whose word carries more weight than that of a newcomer. That's one of the fears that stops me from seriously considering Java shops: fear of unreasonable coworkers, and of being forced to deliver shitty work that meets their idea of how code should be written in the most enterprise way they can come up with.

Always makes me think of that AbstractProxyFactorySomething or similar that I saw in Keycloak, for when you want to implement your own password-quality criteria. When you step back a bit and think about what you actually want, you realize that all you want is a function that takes a string as input and returns a boolean, depending on whether the password is strong enough or fulfills all criteria. Maybe you return a list of unmet criteria, if you want to make it complex. But no, it's AbstractProxyFactorySomething.

I don't understand these complaints.

Here is a tiny interface that will do what you need:

    @FunctionalInterface
    public interface IPasswordChecker
    {
        boolean isValid(String password);
    }
Now you can trivially declare a lambda that implements the interface.

Example:

    final IPasswordChecker passwordChecker = (String password) -> password.length() >= 16;

I'm personally rather fond of Java, but even this (or the shorter `Predicate`) still can't compete with the straightforward simplicity of a type along the lines of `string -> bool`.
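To make the `Predicate` route concrete: the built-in `java.util.function.Predicate<String>` already is that `string -> bool` type, and the criteria compose with `and()`/`or()`/`negate()`. The `PasswordRules` class and rule names below are illustrative:

```java
import java.util.function.Predicate;

public class PasswordRules {
    // Each rule is just a string -> boolean function.
    static final Predicate<String> longEnough = p -> p.length() >= 16;
    static final Predicate<String> hasDigit   = p -> p.chars().anyMatch(Character::isDigit);

    // Rules compose without any factory or abstract base class.
    static final Predicate<String> strong = longEnough.and(hasDigit);

    public static void main(String[] args) {
        System.out.println(strong.test("correct horse battery staple 1")); // true
        System.out.println(strong.test("short1"));                         // false
    }
}
```

No custom interface required, though the dedicated `IPasswordChecker` name arguably documents intent better than a bare `Predicate<String>`.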

[deleted]

    > that was actively encouraged by the design of the language.
Java hasn't changed that much since the "hellscape" 00s. Is it better now? Or what is specific to the language that encourages "the mess of DAOs and Factories"? You can make all of those same mistakes in Python, C#, or C++. I have used Java for about 15 years now and I have never written any of that junky enterprise crap with a million layers of OO.

    > I never use interfaces or the like.
This is the first time I have heard any disdain toward interfaces. What is there not to like?

It insists upon itself. That’s really the problem with Java’s design philosophy from that era; it encourages ceremony. Even if you don’t write the full-on "Enterprise™" soup of DAOs, Factories, and ServiceLocators, the language’s type system and conventions gently nudge you toward abstraction layers you don’t actually need.

Interfaces for everything, abstract classes “just in case,” dependency injection frameworks that exist mainly to manage all the interfaces. Java (and often Enterprise C#) is all scaffolding built to appease the compiler and the ideology of “extensibility” before there’s any actual complexity to extend.

You can write clean, functional, concise Java today, especially with records, pattern matching, and lambdas, but the culture around the language was forged in a time when verbosity was king.

I think the best description of this kind of "obfuscation" that especially afflicted Java still is Steve Yegge's "Kingdom of Nouns" rant:

https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...

It's very useful in C++, funnily enough. This is because I can have a non-templated interface base class, then a templated impl class.

Then my templated impl header can be very heavy without killing my build times since only the interface base class is #included.

Not sure if this is as common in Java.

Using more complex architecture (which requires more human time to understand) to merely make build time shorter is a ridiculous choice.

For a large project this could save hours of developer time.

C++ is a hell of a language.

just buy your devs faster computers to compile on

You can't buy your way out of this, because C++ builds are only parallelizable across multiple translation units[1] (i.e. separate .cpp files). Unless you're willing to build a better single-core CPU, there's not much you can do.

The challenge with modern C++ projects is that every individual TU can take forever to build because it involves parsing massive header files. Oftentimes you can make this faster with "unity builds" that combine multiple C++ files into a single TU since the individual .cpp file's build time is negligible compared to your chonky headers.

The reason the header files are so massive is because using a templated entity (function or class) requires seeing the ENTIRE DEFINITION at the point of use, because otherwise the compiler doesn't know if the substitution will be successful. You can't forward declare a templated entity like you would with normal code.[2]

If you want to avoid including these definitions, you create an abstract interface and inherit from that in your templated implementations, then pass the abstract interface around.

[1] or linking with mold

[2] There used to be a feature that allowed forward declaring templated entities called "export". A single compiler tried to implement it and it was such a failure it was removed from the language. https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n14...

Java uses type erasure, which is very cheap at compile time, but you cannot do things like

   T t = new T(); // T is a generic type parameter; this does not compile in Java
C++ uses reified generics, which are heavy on compile time but allow the above.
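Because of erasure, the usual Java workaround is to pass the construction in explicitly, for example as a `Supplier<T>`. The `Box` class below is an illustrative sketch, not a standard API:

```java
import java.util.function.Supplier;

public class Box<T> {
    private final T value;

    Box(Supplier<T> factory) {
        // T t = new T();          // would not compile: T is erased at runtime
        this.value = factory.get(); // so the caller supplies the constructor
    }

    T get() { return value; }

    public static void main(String[] args) {
        // The constructor reference plays the role that `new T()` cannot.
        Box<StringBuilder> b = new Box<>(StringBuilder::new);
        System.out.println(b.get().getClass().getSimpleName());
    }
}
```

Passing `Class<T>` and using reflection is the other common workaround, with the same underlying cause: the type argument simply isn't there at runtime.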

That's why they're called generic parameters, not template parameters; the code is generic over all possible parameters, not templated for every possible parameter.

    > C++ uses reified generics
I was a C++ programmer for many years, but I never heard this claim. I asked Google AI and it disagrees.

    > does c++ have reified generics?

    > C++ templates do not provide reified generics in the same sense as languages like C# or Java (to a limited extent). Reified generics mean that the type information of generic parameters is available and accessible at runtime.

Interesting, I’d never picked up on this pedantic subtlety. I too thought reified meant what you could do at the call site, not what you could do at runtime. Was my understanding wrong, or is Gemini hallucinating?

In any event, you have to use weird (I think “unsafe”) reflection tricks to get the type info back at runtime in Java. To the point where it makes you think it’s not supported by the language design but rather a clever accident that someone figured out how to abuse.

Funny, given that interfaces are the good part (especially compared to inheritance).

I didn't mean to imply that interfaces are bad or useless. Just that I don't use them, probably because I write most of my stuff in Python these days.

"But but... I can swap out my entire persistence layer since it's all just an interface!"

Has anyone ever actually done this?

I have used something similar with effects in Haskell to mock "the real world" for running tests.

But if it were as convoluted to use as it is in Java, I wouldn't. And also, it's not enterprise CRUD. Enterprise CRUD resists complex architectures like nothing else.

> Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.

Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among its purposes, it allows code to be reused and even updated without having to recompile the project. That's pretty useful.

Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.

Perhaps you haven't had the opportunity to experience the advantages of using these techniques, or weren't mindful of when you benefited from them. We tend to remember the bad parts and assume the good parts are a given. But personal taste doesn't refute the value and usefulness of features you never learned to appreciate.

> > Most code bases don't need dynamically loaded objects designed with interfaces that can be swapped out. In fact, that functionality is nearly never useful. But that's how most people wrote Java code.

> Perhaps I'm not following, but dynamically loaded objects are the core feature of shared libraries. Among it's purposes, it allows code to be reused and even updated without having to recompile the project. That's pretty useful.

> Interfaces are also very important. They allow your components to be testable and mockable. You cannot have quality software without these basic testing techniques. Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.

I don't think GP was saying that dynamically loaded objects are not needed, or that interfaces are not needed.

I read it more as "Dynamically loaded interfaces that can be swapped out are not needed".

> Also, interfaces are extremely important to allow your components to be easily replaced even at runtime.

The share of all software that actually benefits from this is extremely small. Most web-style software with stateless request/response is better architected for containers and rolling deployments. Most businesses are also completely fine with a few minutes of downtime here and there. For runtime-replacement to be valuable, you need both statefulness and high SLA (99.999+%) requirements.

To be fair, there is indeed a subset of software that is both stateful and with high SLA requirements, where these techniques are useful, so it's good to know about them for those rare cases. There is some pretty compelling software underneath those Java EE servers for the few use-cases that really need them.

But those use-cases are rare.

>You cannot have quality software without these basic testing techniques

Of course you can, wtf?

Mocks are often the reason tests are green while the app doesn't work :)

> Of course you can, wtf?

Explain then what is your alternative to unit and integration tests.

> Mocks are often the reason tests are green while the app doesn't work :)

I don't think that's a valid assumption. Tests just verify the system under test, and test doubles are there only to provide inputs in a way that isolates your system under test. If your tests either leave out invariants that are behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created tests, not in the concept of a mock.

A workman and his tools.

I personally believe mocks are a bad practice caused by bad architecture. When different components are intertwined and you cannot test them in isolation, the good solution is refactoring the code; the bad one is using mocks.

For an example of such an intertwined architecture, see Mutter, the window manager of GNOME Shell (a program that manages windows on the Linux desktop). The code that handles key presses (accessibility features, shortcuts) needs objects like MetaDisplay or MetaSeat and cannot be tested in isolation; you figuratively need half of Wayland for it to work.

Good tests use the black-box principle; i.e. they only use public APIs and do not rely on knowledge of the inner workings of a component. When the component changes, the tests do not break. Tests with mocks rely on knowing how the component works and which functions it calls; they become brittle, break often, and require a lot of effort to update when the code changes.

Avoid mocks as much as you can.

It's not necessary to have mocks for unit tests. They can be a useful tool, but they aren't required.

I am fine with having fake implementations and so forth, but the whole "when function X is called with Y arguments, return Z" thing is bad. It leads to very tight coupling of the test code with the implementation, and often means the tests are only testing against the engineer's understanding of what's happening - which is the same thing they coded against in the first place. I've seen GP's example of tests being green but the code not working correctly a number of times because of that.
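A hand-written fake, as opposed to a "when X, return Z" stub, is just a real in-memory implementation of the interface, so the test exercises actual behavior rather than mirroring the implementation's call sequence. The `UserStore` names below are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The interface the production code depends on.
interface UserStore {
    void save(String id, String name);
    Optional<String> find(String id);
}

// A fake: a complete, working implementation backed by a map.
// Tests observe real save/find behavior, not stubbed call expectations.
class InMemoryUserStore implements UserStore {
    private final Map<String, String> data = new HashMap<>();
    public void save(String id, String name) { data.put(id, name); }
    public Optional<String> find(String id) { return Optional.ofNullable(data.get(id)); }
}

public class FakeDemo {
    public static void main(String[] args) {
        UserStore store = new InMemoryUserStore();
        store.save("42", "Ada");
        System.out.println(store.find("42").orElse("missing"));
    }
}
```

The fake knows nothing about which methods the system under test calls or in what order, which is exactly what keeps such tests from breaking on refactors.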

Most compilers do not use "unit" tests per se. Much more common are integration tests targeted at a particular lowering phase or optimization pass.

This is pretty important since "unit tests" would be far too constraining for reasonable modifications to the compiler, e.g. adding a new pass could change the actual output code without modifying the semantics.

LLVM has "unit" tests

I mean they run a single pass with some small LLVM IR input and check that the output IR is correct.

>Explain then what is your alternative to unit and integration tests.

Tests against real components instead of mocks.

>If your tests either leave out invariants that are behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created tests, not in the concept of a mock.

Nowadays external components can be very complex systems, e.g. databases, message queues, third-party APIs, and so on.

A lot of things can go wrong, and you aren't even aware of them in order to get mocks right.

Examples? Fuckin' emojis.

With a mocked in-memory database they work fine, but fail on the real DB due to encoding settings.

I'm not a fan of extensive mocking but you're conflating two rather different test cases. A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration. You should of course have tests towards the database too, but then you'd mock out parts of the application instead and not the database itself.

>A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration.

Wdym?

You're testing, e.g., a simple CRUD operation, like creating an HN thread.

With the mocked DB it passes; with the real DB it fails due to an encoding issue.

The result is that tests are green, but app does not work.

We're talking about OO Java. You bring up shared libraries, list a bunch of things neither unique to Java nor to OO, then claim "etc." benefits.

You really haven't argued anything, so ending on a "you must be personally blind" jab just looks dumb.

It's because the concepts are the same, but people get enraged by the words. What Java calls a factory would be a "plugin loader" in C++. It's the same concept. And most big C++ codebases end up inventing something similar. Windows heavily uses COM which is full of interfaces and factories, but it isn't anything to do with Java.

Java I think gets attacked this way because a lot of developers, especially in the early 2000s, were entering the industry only familiar with scripting languages they'd used for personal hobby projects, and then Java was the first time they encountered languages and projects that involved hundreds of developers. Scripting codebases didn't define interfaces or types for anything even though that limits your project scalability, unit testing was often kinda just missing or very superficial, and there was an ambient assumption that all dependencies are open source and last forever whilst the apps themselves are throwaway.

The Java ecosystem quickly evolved into the enterprise server space and came to make very different assumptions, like:

• Projects last a long time, may churn through thousands of developers over their lifetimes and are used in big mission critical use cases.

• Therefore it's better to impose some rules up front and benefit from the discipline later.

• Dependencies are rare things that create supplier risks, you purchase them at least some of the time, they exist in a competitive market, and they can be transient, e.g. your MQ vendor may go under or be outcompeted by a better one. In turn that means standardized interfaces are useful.

So the Java community focused on standardizing interfaces to big chunky dependencies like relational databases, message queuing engines, app servers and ORMs, whereas the scripting language communities just said YOLO and anyway why would you ever want more than MySQL?

Very different sets of assumptions lead to different styles of coding. And yes it means Java can seem more abstract. You don't send queries to a PostgreSQL or MySQL object, you send it to an abstract Connection which represents standardized functionality, then if you want to use DB specific features you can unwrap it to a vendor specific interface. It makes things easier to port.

I am currently being radicalised against OOP because of one specific senior on my team who uses it relentlessly, no matter the problem domain. I recognise there are problems where OOP is a good abstraction, but there are so many places where it isn't.

I suspect many OOP haters have experienced what I'm currently experiencing: stateful objects for handling calculations that should be stateless, a confusing bag of methods that are sometimes hidden behind getters so you can't even easily tell where the computation is happening, etc.

You can write crappy code in any language. I don't think it's specific to Java. Overall I think Java is pretty good, especially for big code bases.

But there's a real difference in how easy it is to write crappy code in a language. With regard to Java that'd be, for example, nullability, or mutability. Kotlin, in comparison, makes those explicit and eliminates some pain points. You'd have to go out of your way and make your code actively worse for it to be on the same level as the equivalent Java code.

And then there's a reason they're teaching the "functional core, imperative shell" pattern.

On the other hand, Java's tooling for correctly refactoring at scale is pretty impressive: using IntelliJ, it's pretty tractable to unwind quite a few messes using automatic tools in a way that's hard to match in many languages that are often considered better.

I agree with your point, and I want to second C# and JetBrains Rider here. Whatever refactoring you can do with Java in JetBrains IntelliJ, you can do the same with C#/Rider. I have worked on multiple code bases in my career that were hundreds of thousands of lines of Java and/or C#. Having a great IDE experience was simply a miracle.

The language Kotlin is actually developed by JetBrains.

I've found that IntelliJ's refactorings don't work as well for Kotlin as Java but, also, I've avoided Kotlin because I don't like it very much.

You gotta admit, though, that a language which strongarms you into writing classes with hidden state and then extending and composing them endlessly is kinda pushing you in that direction.

It’s certainly possible to write good code in Java but it does still lend itself to abuse by the kind of person that treated Design Patterns as a Bible.

>kind of person that treated Design Patterns as a Bible

I have a vague idea of what the Bible says, but I have my favorite parts that I sometimes get loud about. Specifically, please think really hard before making a Singleton, and then don't do it.

Singletons are so useful in single threaded node land. Configuration objects, DB connection objects that have connection pooling behind them, even my LLM connection is accessed via a Singleton.

OK yeah that's a pretty good general principle. You think you only need one of these? Are you absolutely certain? You SURE? Wrong, you now need two. Or three.

A singleton is more than just "I only need one of these"; it is more a pattern of "I need there to be only one of these," which is subtly different and much more annoying.
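The distinction shows up in the usual Java idiom for "there must be only one": an enum singleton, where the JVM itself enforces the single instance. A minimal sketch (the `Config` name and fields are invented):

```java
// The enum form guarantees exactly one instance, created lazily and
// thread-safely by the JVM. That guarantee is global state, which is
// precisely why it deserves hesitation before use.
public enum Config {
    INSTANCE;

    private String env = "dev";

    public String env() { return env; }
    public void setEnv(String env) { this.env = env; }
}
```

Every caller of `Config.INSTANCE` shares the same mutable object; whether that is a convenience or a liability is exactly the debate above.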

Separation of data and algorithm is so useful. I can't really comment on how your senior is doing it, but in the area of numeric calculations, making numbers know anything about their calcs is a Bad Idea. Even associations with their units or other metadata should be loose. Functional programming provides such a useful intellectual toolkit even if you program in Java.

Sorry to learn, hope you don't get scar tissue from it.

Not sure how many people are writing programs with lots of numeric calculations.

Most programs in my experience are about manipulating records: retrieve something from a database, manipulate it a bit (change values), update it back.

Here OOP does a good job: you create the data structures you need to manipulate, but expose exactly the interface that effects the changes in a way that respects the domain rules.

I do get that this isn't every domain out there and _no one size fits all_, but I don't get the OP's complaints.

I currently think that most of the anger about OOP is related either to bad practices (overuse) or to a lack of knowledge among newcomers. OOP is a tool like any other and can be used wrong.

Creating good reusable abstractions is not easy. It's quite possible to create tarballs of unusable or overwrought abstractions. That is less a knock on OOP and more a knock on the developers.

But that is what classes do: they let you have data lists and dictionaries implemented as a class, so that your algorithm doesn't have to understand how the data structure is implemented. In functional programming the algorithm has to be aware of the data structure, which I feel is much worse.

> I recognise there are problems where OOP is a good abstraction, but there are so many places where it isn't.

Exactly. This is the way to think about it, imo. One of those places is GUI frameworks, I think, and there I am fine doing OOP, because I don't have a better idea how to get things done, and most GUI frameworks/toolkits/whatever are designed in an OOP way anyway. Other places I just try to go functional.

I agree. Neither OOP nor functional programming should be treated as a religion or as a paradigm that one must either be fully invested in or not.

OOP is a collection of ideas about how to write code. We should use those ideas when they are useful and ignore them when they are not.

But many people don't want to put in the critical thinking required to do that, so instead they hide behind the shield of "SOLID principles" and "best practice" to justify their bad code (not knocking SOLID principles; it's just that people use them to justify making things object-oriented when they shouldn't be).

I think the OO hatred comes from how academia and certain enterprise organisations in our industry picked it up and taught it like a religion, molding an entire generation of developers who wrote some really horrible code because they were taught that abstractions were always correct. That obviously wasn't so, and outside those institutions the world slowly realized that abstractions were in many ways worse for cyclomatic complexity than what came before. Maybe not in a perfect world where people don't write shitty code on a Thursday afternoon, after a long day of horrible meetings, in a long week of having a baby cry every night.

As with everything, there isn't a golden rule to follow. Sometimes OO makes sense, sometimes it doesn't. I rarely use it, or abstractions in general, but there are some things where it's just the right fit.

Much like Agile, or Hungarian notation. When a general principle becomes a religion it ceases to be a good general principle.

> I think the OO hatred comes from how academia and certain enterprise organisations for our industry picked it up and taught it like a religion.

This, this, this. So much this.

Back when I was in uni, Sun had donated basically an entire lab of those computer terminals that you signed in to with a smart card (I forget the name). In exchange, the uni agreed to teach all classes related to programming in Java, and to have the professors certify in Java (never mind the fact that nobody ever used that laboratory, because the lab techs had no idea how to work with those terminals).

As a result of this, every class from algorithms to software architecture felt like a Java cult indoctrination. One of the professors actually said C was dead because Java was clearly superior.

> One of the professors actually said C was dead because Java was clearly superior.

In our uni (around 1998/99) all professors said that, except the Haskell teacher, who indeed called Java a mistake (but C also).

Turns out everyone was completely wrong except for that one guy working in Haskell.

Tale as old as time.

Java was probably close to 50% of the job market at some point in the 2000s and C significantly dried up with C++ taking its place. So I'm afraid everyone was right actually.

To be honest, I'm convinced the reason so many people dislike Java is because they have had to use it in a professional context only. It's not really a hobbyist language.

Just for the record, I don't think C ever dried up in the embedded space. And the embedded space is waaaay bigger than most people realise, because almost all of it is proprietary, so very little "leaks" onto the public interwebs.

Believe it or not but there is plenty of Java and C++ in the embedded space. It’s far from being a C fortress.

Probably the Sun Ray computer.

https://en.wikipedia.org/wiki/Sun_Ray

This was it!

And now you know how Nvidia CUDA got so popular.

Tried to modify one boolean in a codebase a few weeks ago and I had to go thru like 12 levels of indirection to find "the code that actually runs".

tourist2d seems to have triggered some moderation trap, but wrote:

> Sounds like a problem with poor code rather than something unique to OOP.

And yeah, OO may lean a bit towards more indirection, but it definitely doesn't force you to write code like that. If you go through too many levels, that's entirely on the developer.

Sounds like a problem with poor code rather than something unique to OOP.

> I can't imagine building something like a graphics framework without some subtyping.

Let me introduce you to Fudgets, an I/O and GUI framework for Haskell: https://en.wikipedia.org/wiki/Fudgets

They use higher order types to implement subtyping as a library, with combinators. For example, you can take your fudget that does not (fully) implement some functionality, wrap it into another one that does (or knows how to) implement it and have a combined fudget that fully implements what you need. Much like parsing combinators.

(Hi Andrew)

It's the misuse of OO constructs that gives it a bad name, almost always that is inheritance being overused/misused. Encapsulation and modularity are important for larger code bases, and polymorphism is useful for making code simpler, smaller and more understandable.

Maybe the extra long names in Java don't help either, along with the overuse/forced use of patterns? At least it's not Hungarian notation.

Heck, I love the long names. I know, I also hate FooBarSpecializedFactory, but that's waaaay better than FBSpecFac.

A sample: pandas' loc, iloc etc. Or Haskell's scanl1. Or Scheme's cdr and car. (I know - most of the latter are common functions that you'll learn after a while, but still, reading them at first is terrible.)

My first contact with a modern OO language was C# after years of C++. And I remember how awkward I thought it was that the codebase looked like everything was spelled out. Until I realized that it is easier to read, and that's the main quality of a codebase.

Objective-C says hello as far as extra long names are concerned.

> CMMetadataFormatDescriptionCreateWithMetadataFormatDescriptionAndMetadataSpecifications(allocator:sourceDescription:metadataSpecifications:formatDescriptionOut:)

https://developer.apple.com/documentation/coremedia/cmmetada...:)

Jason! Couldn't agree more.

OOP is just not how computers work.

Computers work on data. Every single software problem is a data problem. Learning to think about problems in a data oriented way will make you a better developer and will make many difficult problems easier to think about and to write software to solve.

In addition to that, data oriented software almost inherently runs faster because it uses the cache more efficiently.

The objects that fall out of data oriented development represent what is actually going on inside the application instead of how an observer would model it naively.
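For anyone unfamiliar, here is a minimal sketch of the idea in JavaScript (all names and numbers are invented): the same update written against an array-of-structs layout and a struct-of-arrays layout. The latter keeps each field contiguous in memory, which is what lets the cache work in your favor.

```javascript
// Array-of-structs: each particle is its own object, scattered on the heap.
const aos = Array.from({ length: 4 }, (_, i) => ({ x: i, y: i * 2 }));

// Struct-of-arrays: one contiguous typed array per field.
const soa = {
  x: Float64Array.from([0, 1, 2, 3]),
  y: Float64Array.from([0, 2, 4, 6]),
};

// The same update written both ways; the SoA loop walks one dense array.
function moveAos(particles, dx) {
  for (const p of particles) p.x += dx;
}
function moveSoa(particles, dx) {
  for (let i = 0; i < particles.x.length; i++) particles.x[i] += dx;
}

moveAos(aos, 10); // aos[3].x is now 13
moveSoa(soa, 10); // soa.x[3] is now 13
```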

I really like data oriented development and I wish I had examples I could show, but they are all $employer’s.

As a reverse engineer, I totally get the phrase.

Even with non-obfuscated code, if you're working with a decompilation you don't get any of the accompanying code comments or documentation. The more abstractions are present, the harder it is to understand what's going on. And, the harder it is to figure out what code changes are needed to implement your desired feature.

C++ vtables are especially annoying. You can see the dispatch, but it's really hard to find the corresponding implementation from static analysis alone. If I had to choose between "no variable names" and "no vtables", I'd pick the latter.

Vtables can be annoying to follow through, but try reverse-engineering an Objective-C binary! Everything is dispatched dynamically, so 99% of the call graph ends in objc_msgSend(). Good luck figuring out what the message is, and the class of the object receiving it.

Isn't that easy? The message is a string in one of the register parameters to it.

> Everything is dispatched dynamically

Well, not everything, there is NS_DIRECT. The reason for that being that dynamic dispatch is expensive - you have to keep a lot of metadata about it in the heap for sometimes rarely-used messages. (It's not about CPU usage.)

It’s all about the data model and the architecture.

I think people focus a lot on inheritance, but the core idea of OO is more the grouping of values and functions. Conceptually, you think about how methods transform the data you are manipulating, and that's a useful way to think about programs.

This complexity doesn't really disappear when you leave OO languages, actually. The way most complex OCaml programs are structured, with modules grouping one main type and the functions working on it, is in a lot of ways inspired by OO.

> grouping of values and functions

Encapsulation.

Which I think is misunderstood a lot, both by practitioners and critics.

You're right, it is all about the problem domain. Unfortunately, there was a solid decade where that was not the typical advice, and OO was pushed (in industry and in education) as the last word in programming, suitable for all tasks. There's a generation out there who was taught programming as "instantiate a truck object that inherits from a car object" and another generation who was required to implement math using OOP principles instead of just doing math. Programming languages that did not have object models suddenly developed them, often incompatibly with the rest of the language. So, while I think that OO has its places, I understand why there's a lot of visceral response to it online.

From my pov, both inheritance and encapsulation aren't great if you have to maintain code and add new code.

Also, I dislike design patterns overuse, DDD done Uncle Bob style.

Also we can think of where OOP drives many teams to:

https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo...

https://factoryfactoryfactory.net/

https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

    > https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition

This! Every time I see this project, I laugh out loud. The description reads:

    > FizzBuzz Enterprise Edition is a no-nonsense implementation of FizzBuzz made by serious businessmen for serious business purposes.

I mean come on, these guys are serious!

> I can't imagine building something like a graphics framework without some subtyping.

While React technically uses some OOP, in practice it's a pretty non-OOP way to do UI. Same with e.g. ImGui (C++) and Clay (C). I suppose in the React case there's still an OOP thing called the DOM underneath, but that's pretty abstracted.

In practice most of the useful parts of OOP can be done with a "bag/record of functions". (Though not all. OCaml has some interesting stuff wrt. the FP+OOP combo which hasn't been done elsewhere, but that may just be because it wasn't ultimately all that useful.)
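As a hedged illustration of the "bag/record of functions" point (all names invented), a plain record of closures gives you encapsulation and substitutability without classes or prototypes:

```javascript
// An "object" as a record of closures over private state: `n` is
// unreachable from outside except through the returned functions.
function makeCounter(start) {
  let n = start;
  return {
    increment: () => ++n,
    value: () => n,
  };
}

// "Subtyping" is structural: anything with the same record shape works here.
function bumpTwice(counter) {
  counter.increment();
  counter.increment();
  return counter.value();
}

const c = makeCounter(10);
bumpTwice(c); // c.value() is now 12
```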

React is a kind of strange dysfunctional OOP pretending not to be, to appeal to people like those on this thread ;)

Function calls have state in React. Think about that for a second! It totally breaks one of the most basic parts of programming theory taught on day one of any coding class. The resulting concepts map pretty closely:

• React function -> instantiate or access a previously instantiated object.

• useState -> define an object field

• Code inside the function: constructor logic

• Return value: effectively a getResult() style method

The difference is that the underlying stateful objects, implemented in OOP using inheritance (check out the Blink code), are covered up by the vdom diffing. It's a very complicated and indirect way to do a bunch of method calls on stateful objects.
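That mapping can be sketched in a few lines (a toy model of hook storage, not React's actual implementation; all names are illustrative):

```javascript
// One shared slot array plus a cursor: this is, roughly, the hidden
// "object" that makes a React function component stateful.
const slots = [];
let cursor = 0;

function useState(initial) {
  const i = cursor++;
  if (slots[i] === undefined) slots[i] = initial;
  const setState = (v) => { slots[i] = v; };
  return [slots[i], setState];
}

// A "function component": its state survives across calls, like a field.
function Counter() {
  const [count, setCount] = useState(0);
  setCount(count + 1); // mutate the hidden object
  return count;
}

function render(component) {
  cursor = 0; // reset the hook cursor each "render"
  return component();
}

render(Counter); // 0
render(Counter); // 1
render(Counter); // 2
```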

The React model doesn't work for a lot of things. I just Googled [react editor component] and the first hit is https://primereact.org/editor/ which appears to be an ultra-thin wrapper around a library called Quill. Quill isn't a React component, it's a completely conventional OOP library. That's because modelling a rich text editor as a React component would be weird and awkward. The data structures used for the model aren't ideal for direct modification or exposure. You really need the encapsulation provided by objects with properties and methods.

React is most likely not what the author had in mind by a graphics framework. The browser implementation of the DOM or a desktop widget system is much more likely the idea.

While using an OOP language.

    Welcome to Node.js v24.10.0.
    Type ".help" for more information.
    > const fn = (x) => x + x
    undefined
    > typeof(fn)
    'function'
    > Object.getOwnPropertyNames(fn)
    [ 'length', 'name' ]
    > fn.name
    'fn'
    > fn.length
    1
    > Object.getPrototypeOf(fn)
    [Function (anonymous)] Object

Yeah, I agree with you, and actually like OOP where it's appropriate.

Unfortunately there were so many bad examples from the old Java "everything needs a dozen factories and thousands of interfaces" days that most people haven't seen the cases where it works well.

If everyone does it wrong, then that alone means it itself is wrong.

Everyone? Really, that's your take? Most code out there is OOP and I find it hard to believe that everything is wrong.

Most food out there is McDonalds.

Indeed. "The purpose of a system is what it does."

I find inheritance works best when you model things that don't exist in reality, but only as software concepts, for example, an AbstractList, Buffer or GUI component.

Really like this concept!

That's why we use dependency injection now~~!

I've always wanted my editor's go-to functionality to take me to an abstract class instead of the place where the actual logic resides. Good times.

Any modern IDE will let you immediately bring up the subclasses with a single hotkey. If you have an abstract class with only a single subclass and that's not because new code is going to be added soon then yes, it's a bad design decision. Fortunately, also easy to fix with good IDEs.

In my last project every class had a corresponding abstract class, and then we used DI to use the real class. Good to be rid of it.

It's all about the problem domain imo. I can't imagine building something like a graphics framework without some subtyping.

The keyword being "some".

Yes, there are those who can use OOP responsibly, but in my (fortunately short) experience with Enterprise Java, they are outnumbered by the cargo-cult dogma of architecture astronauts who advocate a "more is better" approach to abstraction and design patterns. That's how you end up with things like AbstractSingletonProxyFactoryBean.

[dead]

Tbh decompiling software and figuring out how it works isn’t easy but that is part of the fun :) - it’s the reason ive ended up following many of the weird paths in computing that I have

Agreed; modding obfuscated Java is impressive, but not quite on the level of modding in the (Nintendo) emulation community. The things that have been achieved with classic Nintendo titles are absurd, like adding high-performance online multiplayer to Super Smash Bros. Melee.

Said online multiplayer [0].

The devs also wrote a write-up here about how they handle the desyncs in netcode [1].

[0] https://slippi.gg/

[1] https://medium.com/project-slippi/fighting-desyncs-in-melee-...

Yeah, I honestly think that the thing which would kill modding in the future won't be any kind of (overtly) hostile action, it will simply be sheer inertia. Since the new "drop" system, pretty much every minor version requires rewriting many things in your mod. Modders find it hard to keep up, and there will be a point when many mods just don't bother updating to the newest version.

The game used to be simple, both conceptually and codewise, but obviously it became more and more bloated the more developers touched it and the more bureaucracy was added. Now it's a complete nightmare, and I bet it's also a nightmare for the developers too, considering how hard it is for them to fix even basic issues which have been in the game for like a decade at this point.

Indeed. With how good and cheap/free decompilers have gotten over the years, my preferred way to read abstraction-happy C++ and Rust code is to compile it with optimisations and debug symbols and then read the decompiler output.

But you don’t understand, it enables code re-use…

You have to have Factories and inheritance..

/s

To be fair, since 2019 Mojang has been providing the mappings instead of everyone having to use community-created ones.

Very few people use mojang mappings -- the two big modloaders, forge and fabric (and their derivatives) have their own mappings respectively, due to the restrictions of the mojang mappings. It's possible to use the mojang mappings, but much less common.

PaperMC exclusively uses Mojang mappings, and it's the most popular loader for server-side modding these days.

Paper isn’t a mod loader, it uses Fabric under the hood. Also, what makes you think it’s the most popular server? I thought it was fading. I switched my server from Paper to Fabric years ago.

It doesn't use Fabric under the hood, where did you hear that?

> Also, what makes you think it’s the most popular server?

Because it's the only server software that can actually scale and support a long-term server with feature and bugfix stability. Its popularity bears out in what hosting companies say people are most commonly using. Though I'm not sure if there is a specific publicly published statistic to point to that proves this - there are bStats global stats, but they are biased towards the Paper ecosystem.

Fabric is getting close with certain optimization and bugfixing mods, but it's still not there. Paper has a checklist of what optimizations and fixes must be included for a release to proceed, whereas Fabric ecosystem is still a hodgepodge of different things that are only available on specific Minecraft versions.

I've recently been setting up a Velocity server network for some friends and friends of friends, and I agree with your findings. I don't have much history on Forge vs Paper vs Fabric vs..... (and found it all very overwhelming, honestly) but from what I can tell, the popular sites like Modrinth have communities way more focused around Forge/NeoForge.

Paper does seem to have its own site for plugins, Hangar or something? (Don't have my web history on this PC.) But the community support doesn't seem nearly as fleshed out.

It is incredible though, before 1.21 the last time I played around with MC server hosting was probably around 1.8 days, when it seemed like you only had Bukkit and a few plugins for it

Paper is custom server software and could be easily argued to be a mod loader if you consider plugins to be mods (although it’s probably a weak argument since there’s no mixin support built-in, although some large servers have added mixin support to their own Paper forks). However, it does not use Fabric under the hood (it’s based on Bukkit/CraftBukkit). By playercount, it is the largest (custom, standalone) MC server software in the world.

I tend to think the distinction between "plugins" and "server-side mods" is a little pointless these days. I would consider something a "mod" if it's in an environment where it can deeply touch Mojang code and completely transform it if needed. And before we ever had Fabric/Sponge mixins, we had reflection and ASM for doing just this. We still have that, and a lot of Paper plugins make extensive use of reflection - particularly libraries that reflect into netty to hook directly into the protocol are quite common.

You’re right, my bad, Spigot (from Bukkit), not Fabric. I got the impression it’s actually using ~~Fabric~~ Spigot code for this because you’re using plugins compiled for Spigot and both a paper.whatever and spigot.whatever config file, but after looking it up I see that they forked it.

I’m not really clear on mod vs plugin vs mixin, I was just trying to refer to whatever software does the decompilation work rather than just consuming APIs provided by projects that do.

Sounds like it’s correct that Paper didn’t do its own mod API, but incorrect that Paper doesn’t do its own decompilation work.

> By playercount, it is the largest (custom, standalone) MC server software in the world.

Do you have a source on this? Not trying to accuse you of anything, I just know that a few servers claim this, and don’t know if we have reliable numbers.

Ah, I was aware of the different Fabric (Yarn) mappings and internal names (due to the few mods like architectury) but I think Forge switched over to Mojang's?

> As of 1.16.5 [(2021)], Forge will be using Mojang's Official Mappings, or MojMaps, for the foreseeable future

Pretty sure this applies to NeoForge as well: https://neoforged.net/personal/sciwhiz12/what-are-mappings/

(Neo)Forge primarily use either mojmaps or Parchment, which are the Mojang mappings with some extra goodies like docstrings and parameter names

It took me a while to find how to obtain the official mappings, but this article seems to have instructions: https://minescript.net/mappings

According to the article, official mappings can be found here: https://piston-meta.mojang.com/mc/game/version_manifest_v2.j...

They're also linked on the wiki page for each release, along with links to the client and server jars: https://minecraft.wiki/w/Java_Edition_1.21.5

Why do they obfuscate if they're just going to provide the mappings?

Proguard can also apply optimizations while it obfuscates. I think a good JVM will eventually do most of them itself, but it can help code size and warm-up. I'm guessing as JVMs get better and everyone is less sensitive to file sizes, this matters less and less.

And there's no way to do only the optimisation part? Surely you could optimise without messing up class and method names..?

One of the biggest optimizations it offers is shrinking the size of the classes by obfuscating the names. If you're obfuscating the names anyway, there's no reason that the names have to be the same length.

"hn$z" is a heck of a lot smaller than "tld.organization.product.domain.concern.ClassName"

So we're not talking about runtime performance, but some minor improvement in loading times? I assume that once the JVM has read the bytecode, it has its own efficient in-memory structures to track references to classes rather than using a hash map with fully qualified names as keys

Proguard was heavily influenced by the needs of early Android devices, where memory was at a real premium. Reducing the size of static tables of strings is a worthwhile optimisation in that environment

Okay but we're talking about Minecraft on desktops and laptops, where the relevant optimizations would be runtime performance optimizations, no?

Probably, but proguard tends to bundle the whole lot together

Even a hash map with fully qualified names as keys wouldn't be so bad, because String is immutable in Java, so the hash code can be cached on the object.

The names need to be stored somewhere because they are exposed to the program that way

They have to be stored somewhere, but they don't have to be what the JVM uses when it e.g. performs a function call at runtime. Just having the names in memory doesn't slow down program execution.

At runtime this is going to be a branch instruction yes

Yeah in some ways the obfuscation and mappings are similar to minification and sourcemaps in javascript.

And minification in JavaScript only reduces the number of bytes that have to be sent over the wire; it doesn't improve runtime performance.

According to the v8 devs it also can increase parsing performance

> Our scanner can only do so much however. As a developer you can further improve parsing performance by increasing the information density of your programs. The easiest way to do so is by minifying your source code, stripping out unnecessary whitespace, and to avoid non-ASCII identifiers where possible.

https://v8.dev/blog/scanner

Sure, but that's also just in the category improving loading a bit. It doesn't have anything to do with runtime performance.

Well, maybe that's why they're not obfuscating anymore.

In 2004 I played an MMO game on a pirated server. The owner of the server somehow got a version of the server binary, and used a hex editor (!) to add new features to the binary over time.

It's the closest I've ever seen to someone literally being one of the hackers from The Matrix, staring at hexadecimal and changing chars one at a time.

Presumably they were using a decompiler e.g. IDA Pro to know what characters to change in the hex editor? I've done that before to find offsets in the binary to NOP out some function calls.

I just remember cracking the Space Empires III shareware as a child. I didn't release the crack. Plus it was a bit crappy: every time I loaded the game I had to enter a wrong serial, so that the check would accept it as the right serial code. I simply changed a few x86 opcodes to invert the check condition...
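For the curious, this kind of crack is often a one-byte patch, since x86 conditional jumps come in inverted pairs (JE is 0x74, JNE is 0x75). The bytes below are illustrative, not from the actual game:

```javascript
// A made-up 4-byte snippet: after a CMP, `je +0x0a` (0x74 0x0a) jumps past
// the "bad serial" branch only when the comparison succeeded.
const code = Uint8Array.from([0x74, 0x0a, 0x90, 0x90]); // je +10; nop; nop

// Inverting the check is a single-byte edit in a hex editor:
// JE (0x74) -> JNE (0x75), so a *wrong* serial now takes the success path.
code[0] = 0x75;
```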

That's a level of dedication that I have never devoted to anything in my life.

That's energy that could change the world if harnessed correctly.

It did change the world - it made it better for players of the game.

That approach is also super useful if you're manually flashing an image onto some embedded thing (like an ECU, or other types of boot rom). Of course on many modern systems you'll have to get around the checksum guards, but there's typically all sorts of glitch hacks to do that.

Wasn't that WoW? I vaguely recall that a lot of the private servers worked off of a copied and / or decompiled version of their own server software for years, which is also why they never went further than the WotLK expansion. (the other part of that was people didn't want to, but that's another discussion)

AFAIK the WoW server code was never leaked. All private server code bases were developed by reverse engineering network traffic, game behaviour and client asset files.

E.g. the code for AzerothCore is fully available (and very easy to run on very low spec hardware)

https://github.com/azerothcore/azerothcore-wotlk

Me too. Having only a vague familiarity with the game, I thought that mods were using some official plugin system. I had no idea that minecraft modders (presumably kids/teens?) were not only reverse engineering things but also creating an entire ecosystem to work around proguard.

Over time people learned the key APIs and classes that you needed to interact with. And obfuscated Java is like an order of magnitude easier to work with than machine code. Once someone figured out how to do something it was generally pretty easy to use that interface to do your own thing. Modders of course still often hit edge cases that required more reversing, but yeah, it was really cool to watch over the last 15+ years :)

Not only working around proguard, but Minecraft mods are built on top of an incredibly cool and flexible runtime class file rewriting framework that means that each JAR can use simple declarative annotations like @Inject to rewrite minecraft methods on the fly when their mod is loaded or unloaded. This massively revolutionized mod development, which was previously reliant on tens of thousands of lines of manually compiled patches that would create "modding APIs" for developers to use. Putting the patching tools in the hands of the mod developers has really opened up so many more doors.
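For readers who haven't seen Mixins, the effect of an @Inject can be approximated in JavaScript by wrapping a method at runtime (only an analogy; real Mixins rewrite JVM bytecode at class-load time, and all names here are invented):

```javascript
// A toy stand-in for method patching: wrap an existing method so extra
// code runs at its head, like a Mixin @Inject(at = @At("HEAD")).
class Game { tick() { return "ticked"; } }

function injectAtHead(cls, method, hook) {
  const original = cls.prototype[method];
  cls.prototype[method] = function (...args) {
    hook.call(this, ...args);            // mod code runs first
    return original.call(this, ...args); // then the vanilla method
  };
}

const events = [];
injectAtHead(Game, "tick", () => events.push("mod hook ran"));

new Game().tick(); // runs the hook, then returns "ticked"
```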

Minecraft also has a plugin system based around JSON file datapacks, but it's a lot more limited. It's more at the level of adding a few cool features to custom maps than completely modding the game.

The devs for Java Edition really have mods in mind nowadays.

- They left in the debug features that they used to strip out of the code.

- They left in their testing infrastructure, which they used to strip out as well.

- They started making everything namespaced to differentiate contents between mods (like in this week's snapshot they made gamerules namespaced with the "minecraft:" prefix like items and blocks and whatnot)

- They are adding a lot more "building blocks" type features that both allow new /easier things in datapacks, and in mods as well.

Method patching with Mixins is less needed now because the game's internal APIs are more versatile than ever.

That's definitely true, and I think that's a testament to Minecraft / Java's strong OO design—it dovetails very nicely with the Open/Closed principle. However my view is that for a mod to be a mod, there's always going to be stuff that you can't/shouldn't implement just with datapacks—whether that's complex rendering features, new entity logic, or whatever. The Mixin processor makes it really easy to build these kinds of features in a very compatible way

These tools sound very powerful, could they find use for other Java codebases?

I don't know where you'd use it besides modding, but it is a general-purpose framework: https://github.com/SpongePowered/Mixin

Other codebases don't tend to need those tools, because they already use frameworks like Spring or Micronaut which have such features built-in. Usually without bytecode rewriting and with more concern given to API definition.

For example, in Micronaut (which is what I'm more familiar with) you can use @Replace or a BeanCreatedListener to swap out objects at injection time with compatible objects you provide. If a use-site injects Collection<SomeInterface> you can just implement that interface yourself, annotate your class with @Singleton or @Prototype and now your object will appear in those collections. You can use @Order to control the ordering of that collection too to ensure your code runs before the other implementations. And so on - there's lots of ways to write code that modifies the execution of other code, whilst still being understandable and debuggable.

You still need quite a lot of mixins / modified code to actually do useful things. Mojang isn't always making things unnecessarily extensible, just extensible enough for them to keep updating the game.

They've also been working with a lot of modders on the rendering engine over the past year or two.

> I had no idea that minecraft modders (presumably kids/teens?) [...]

Players who were teenagers when the game first came out are now 29 to 35 years old. It's a pretty ancient game at this point. From my experience, most contemporary modders are in their late 20s.

We're still relying on legacy code written by inexperienced kids, though...

There is and kind of isn't. There are community led modding apis, but also datapacks that are more limited but still allow someone to do cool stuff leveraging tools, items, etc already in the game.

If you remember entire contraptions of command blocks doing stuff like playing Pokemon Red in Minecraft or "one commands" that summoned an entire obelisk of command blocks, the introduction of datapacks pretty much replaced both of those.

Datapacks kind of ruined the Java side. Instead of (metaphorically) "grass.colour = green;" now you have several layers of indirection to look up the mapping of block types to colours, in several real and virtual entity-attribute-value stores that shadow and inherit from each other, and the easiest way to make grass green becomes to write a data pack - with all the limitations of that, not to mention the obscure syntax (do you write "{"inherits":"dirt", "variables":{"colour":"green"}}" in stuff/things/blocks/grass/index.html.json?)

This is called the inner-platform effect, where in order to avoid programming in the original language, you invent a worse programming language. Apparently it used to be a big killer of enterprise software. It's also one of the reasons Minecraft needs ten times the RAM it used to. To be fair, we have fifty times as much RAM as we did when Minecraft came out, but wouldn't you rather have it put to use doing extended view distance, extended world height, and shaders?

I remember Notch saying in 2010 that he planned to add an official modding API, but it never actually happened.

---

Edit: https://web.archive.org/web/20100708183651/http://notch.tumb...

Data packs were released in October 2017! And we had command blocks in 2012 for custom maps

[flagged]

One thing I always notice about this kind of post is that it never only has one accusation of oppressing a demographic group. There's always three of them at once.

Makes it feel lightweight I think.

Turns out people who are like that rarely constrain it to one demographic.

Notch's problematic behavior and views are well-known in the community and both Mojang and Microsoft have had to distance themselves from him. To the point that they had to remove all instances of "notch" in the codebase

Here's some examples, particularly of his antisemitism to better illustrate the issues

https://xcancel.com/jacqui_Val/status/1111080126345826305

[deleted]

I'm sure you're fun at parties

Most modders aren't reverse engineering the game. There's a small community doing the deobfuscation, and then everyone else is effectively working from normal Java code.

It's that way for most modding scenes. Someone makes an API/mod loader which makes it easy, then a lot of enthusiastic players make mods.

I wonder how much overlap Minecraft modders have with the Android custom ROM/app-modding community, another thing that the easy "reversibility" of Java has spawned.

Yeah you got it backwards. Mojang refused to add a modding API, because Notch knew that the community has more freedom the way things currently are.

Actually more common than you might think.

Bethesda games have the same ecosystem - they do provide an official plugin system, but since modders aren't content with the restrictions of that system, they reverse engineered the games (this has been going on since Oblivion) and made a script extender that hacks the game in memory to inject scripts (hence the name).

While I don't doubt that some mods are created by teens, just under half of Minecraft players are adults.

Java is pretty easy to decompile and it's not a huge amount of effort to poke into the generated JVM code and start doing things. If you have a decent idea of how VMs work, C-like languages work, and how object dispatch works it's really not that hard. Also the early modding scene for Minecraft was really fun. I was a huge Minecraft player at the time and was early into the deobfuscating -> modding scene and the community was one of the most fun computing communities I've been in. Due to how focused it was on the game and its output it wasn't bogged down in nearly as much bikeshedding and philosophy as most FOSS projects get. Honestly one of the highlights of the coding I've done in my life.

Decompiling Java is trivial, as Java bytecode more or less maps directly to textual Java code. Deobfuscating is a monumental manual effort.
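To see why the mapping is so direct, here's a toy class (not Minecraft code, just an illustration): the stack bytecode that `javap -c` prints for a simple method corresponds almost instruction-for-statement to the source, which is why decompilers can reconstruct readable Java so reliably.

```java
// Toy example: compile with `javac Add.java`, then `javap -c Add`
// disassembles the bytecode, which mirrors the source almost 1:1.
public class Add {
    // For this method, javap -c prints roughly:
    //   iload_0   // push argument a onto the operand stack
    //   iload_1   // push argument b
    //   iadd      // add the two ints
    //   ireturn   // return the result
    // A decompiler turns that straight back into `return a + b;`.
    public static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
```

What's lost in obfuscation is only the names (`Add`/`add` become `a`/`a`), not the structure - hence the need for mapping files rather than real reverse engineering.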

I am terrified by Minecraft mods always being distributed from dodgy download sites, not infrequently with their own Windows EXE installers. And as far as I know there is no sandboxing at all in the game (uhm, no pun intended), so once installed a mod has full access to your computer?

As someone whose kid has pulled me into the world of using mods (though not (yet) making them for Java Edition) I think this PSA is worth sharing of how to use minecraft mods without pain and with minimal risk, in case anyone is getting started, or has gotten started and finds it frustrating:

1. Use MultiMC to manage instances with various mods, since mods are rarely compatible with each other, and since each version of a mod is only compatible with a single specific point release of the game itself.

Never download any EXE files to get a mod; that does sound sketch AF.

2. Mods are always packaged for a particular loader (some are packaged for multiple loaders, but each build requires Forge, Fabric, or NeoForge), and MultiMC can install any of them into a given instance. Aside from different startup screens there seems to be no difference, so idk why we need 3 different ones.

3. Curseforge's website and modrinth both seem to be legit places to get mods from. I personally find the installable Curseforge program itself to be bad and spammy, and would never use that, but the site still lets you directly download the jars you need, and lets you check "Dependencies" to find out what other mods you need.

If you're using MultiMC or one of its various forks, you can search for and install mods from Modrinth or Curseforge right in the launcher. I find it more convenient than doing it with a browser and dragging them in, but either way works.

Curseforge is OK; Modrinth is a less commercial alternative. The first ten Google hits if you search "Minecraft mods" are probably NOT OK; most Minecraft-related stuff is SEO-optimized to hell by very fishy sites.

There are actually two versions of the Curseforge client: the "Overwolf" version that is built on that platform (and is quite bad as a result), and a newer standalone version that doesn't use Overwolf, which is much better.

> 3. Curseforge's website and modrinth both seem to be legit places to get mods from. I personally find the installable Curseforge program itself to be bad and spammy, and would never use that, but the site still lets you directly download the jars you need, and lets you check "Dependencies" to find out what other mods you need.

PrismLauncher, a popular MultiMC fork, has direct integration with Curseforge and Modrinth, while being completely ad-free. Best of both worlds.

A few mods are not available because Curseforge allows mod authors the option to force ad monetization by blocking API access, but these are few and far between.

PrismLauncher is excellent, it feels like it found the right level of abstraction. Automates chores without black-boxing what it's doing.

And there's a makedeb for it! https://mpr.makedeb.org/packages/prismlauncher

Yeah mods are just regular Java .jars that can do anything. To circumvent this issue Mojang introduced datapacks but they are super limited in what they can do. They’re basically just Minecraft commands in a file along with some manifest files to change e.g. mob loot drop rates. These Minecraft commands are Turing complete but a huge PITA to work with directly, no concept of local variables or if statements, no network access, etc. Every entity in MC has associated NBT data that is similar to JSON that stores values like position, velocity, inventory, etc. You can change NBT with commands for mobs, but in what can only be described as a religious decision, Minecraft commands are unable to modify player NBT. So for example it is impossible to impart a velocity on a player.
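For a flavor of what that limitation looks like in practice (a sketch; the selectors and values here are illustrative), reading NBT works for any entity, but writing it is rejected for players:

```mcfunction
# Read a player's NBT: works fine
data get entity @p Pos

# Push a zombie upward by editing its Motion NBT: works fine
data modify entity @e[type=minecraft:zombie,limit=1] Motion set value [0.0d,0.5d,0.0d]

# The same edit on a player fails with an error along the
# lines of "Unable to modify player data"
data modify entity @p Motion set value [0.0d,0.5d,0.0d]
```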

One wonders why Mojang didn’t embed Lua or Python or something, instead of hand-rolling an even shittier version of Bash. The only reason MC servers like Hypixel exist is because the community developed an API on top of the vanilla jar that makes plugin development easy. Even with that, there is still no way for servers to run client-side code, severely limiting what you can do. They could’ve easily captured all of Roblox’s market share but just let that opportunity slip through their fingers. Through this and a series of other boneheaded decisions (huge breaking changes, changes to the base game, lack of optimization), they have seriously fractured their ecosystem:

- PvP is in 1.8 (a version from 2015) or sometimes even 1.7 (from 2013)

- Some technical Minecraft is latest, some is in 1.12 (from 2017)

- Adventure maps are latest version

- Casual players play Bedrock (an entirely different codebase!)

The words “stable API” have never been said in the Mojang offices. So the community made their own for different versions, servers use the Bukkit 1.8 API, client 1.8 mods use Forge, latest mods use Forge or Fabric. The deobfuscated names are of little utility because the old names are so well ingrained, and modders will also probably avoid them for legal reasons.

Bedrock has proper mod support, and you can program it with TypeScript.

Better than datapacks overall, but it lacks a way to plug into the rendering pipeline or make custom dimensions. Java mods have more capabilities.

> I am terrified by Minecraft mods always being distributed from dodgy download sites and not rarely come with their own Windows EXE installers.

That's not their main means of distribution; most often those sites were just third parties unrelated to the mod authors that repackaged the mod and somehow got better SEO. But TBF, back in the day the UX/UI for installing mods was pretty terrible. Nowadays there are more standardized and moderated distribution websites from which you just download the mod's .jar.

> And as far as I know there is no sandboxing at all in the game (uhm, no pun intended) so once installed the mod has full access to your computer?

This is totally true though.

This is not the norm these days! There are popular launchers like Curseforge that pull from moderated repositories. It’s still not bulletproof, but a far cry from trusting some installer executable.

I prefer Modrinth as well. Both are good, but Curseforge has done some things (requiring an API key etc.) that get in the way of true automation, where Modrinth is genuinely nice.

I used to use Prism Launcher, which gives you a search box with sources like Modrinth and Curseforge on the side. Usually I preferred Modrinth, but some modpacks were on Curseforge only. I never really downloaded a shady modpack from some random website outside these two; in fact sometimes I never opened a website at all, just Prism Launcher itself lol

+1 for Prism Launcher and Modrinth! I use Prism on my Steam Deck. I would’ve mentioned them both but Curseforge was the only name I could remember

Yup, very common to take a popular Minecraft mod, insert malware, rehost it, and SEO your way into getting downloads.

Yes, much like how most software for PC has been written since the beginning of time?

I watched one of my young children power through the obfuscation to learn advanced modding. There was real zeal for the knowledge and the mods in that community.

Mod developers were able to get the source code for Minecraft through a developer program over a decade ago. I'm not sure that it is still the case. I think they are just de-obfuscating the compiled CLASS files so anyone can decompile them without access to the source.

This was how many Runescape bots were developed back in the OSRS days. At some point (RS2?) they made the client super thin so there were no longer methods for high level game functionality (walk to here, get amount of gold in inventory, etc.).

To be fair, the tooling existed before Minecraft, and they published obfuscation maps that map the obfuscated names to the non-obfuscated ones.

They only published the mappings starting from 2019.
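Those official mappings are plain ProGuard-format text: one `readable.Name -> obfuscated:` line per class, with indented member lines below it. A minimal sketch of inverting just the class-level entries (the `dhh` obfuscated name below is made up for illustration, not a real mapping):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: parse the class-level lines of a ProGuard-style mapping file,
// the format Mojang's published mappings use. Class lines look like:
//   net.minecraft.world.level.Level -> dhh:
// Member lines are indented; this minimal version skips them.
public class MappingParser {
    public static Map<String, String> parseClasses(String mappingText) {
        Map<String, String> obfToDeobf = new HashMap<>();
        for (String line : mappingText.split("\n")) {
            // Skip blanks, comments, and indented member lines
            if (line.isEmpty() || line.startsWith(" ") || line.startsWith("#")) continue;
            String[] parts = line.split(" -> ");
            // Class lines end with ':' after the obfuscated name
            if (parts.length == 2 && parts[1].endsWith(":")) {
                String deobf = parts[0].trim();
                String obf = parts[1].substring(0, parts[1].length() - 1).trim();
                obfToDeobf.put(obf, deobf); // invert: obfuscated -> readable
            }
        }
        return obfToDeobf;
    }

    public static void main(String[] args) {
        String sample = "# header comment\n"
            + "net.minecraft.world.level.Level -> dhh:\n"
            + "    int height -> a\n";
        System.out.println(parseClasses(sample).get("dhh"));
        // prints: net.minecraft.world.level.Level
    }
}
```

Real remapping tools go much further (members, method descriptors, rewriting the class files themselves), but the file format itself is this simple.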

It's actually pretty trivial and something a single person can do. I had to rebuild a server jar to source since the guy maintaining it disappeared, and it had special behaviors in it that were relied on for the game network's playability.

[deleted]