> Of course you can, wtf?

Explain, then: what is your alternative to unit and integration tests?

> Mocks are often the reason tests are green while the app doesn't work :)

I don't think that's a valid assumption. Tests just verify the system under test, and test doubles are there only to provide inputs in a way that isolates your system under test. If your tests leave out the invariants behind bugs and regressions, or use invalid or insufficient inputs, the problem lies in how you wrote the tests, not in the concept of a mock.
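
To make that concrete, here is a minimal sketch (hypothetical names, Python's unittest.mock) of a double that only supplies an input: it stands in for an external dependency so the test exercises just the unit's own logic.

    from unittest.mock import Mock

    # Hypothetical unit under test: converts a price using an external rate provider.
    def convert_price(amount, currency, rate_provider):
        rate = rate_provider.get_rate(currency)  # the external dependency
        return round(amount * rate, 2)

    def test_convert_price_applies_rate():
        # The double only supplies an input (a fixed rate); it encodes no
        # knowledge of how convert_price works internally.
        rate_provider = Mock()
        rate_provider.get_rate.return_value = 0.5

        assert convert_price(10.0, "EUR", rate_provider) == 5.0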

A workman and his tools.

I personally believe mocks are a bad practice caused by bad architecture. When different components are intertwined and you cannot test them in isolation, the good solution is refactoring the code; the bad one is using mocks.

For an example of such intertwined architecture, see Mutter, the window manager of GNOME Shell (the program that manages windows on the Linux desktop). The code that handles key presses (accessibility features, shortcuts) needs objects like MetaDisplay or MetaSeat and cannot be tested in isolation; you figuratively need half of Wayland for it to work.

Good tests follow the black-box principle: they only use public APIs and do not rely on knowledge of a component's inner workings. When the component changes, the tests do not break. Tests with mocks rely on knowing how the component works and which functions it calls; such tests become brittle, break often, and require a lot of effort to update when the code changes.
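
A rough Python sketch of that brittleness (hypothetical component, names made up): the first test is black box and survives refactoring; the second asserts which internal helper gets called and breaks as soon as the implementation changes, even though the behaviour is still correct.

    from unittest.mock import patch

    # Hypothetical component: only normalize() is public API.
    def _strip_accents(s):  # internal detail, free to change
        return s.replace("é", "e")

    def normalize(s):
        return _strip_accents(s).strip().lower()

    def test_normalize_black_box():
        # Uses only the public API; keeps passing if the internals are rewritten.
        assert normalize("  Café ") == "cafe"

    def test_normalize_with_mock():
        # Encodes which private helper is called and with what argument.
        # Inlining, renaming or reordering the helper breaks this test,
        # even though normalize() still behaves correctly.
        with patch(__name__ + "._strip_accents", return_value="  Cafe ") as helper:
            assert normalize("  Café ") == "cafe"
            helper.assert_called_once_with("  Café ")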

Avoid mocks as much as you can.

It's not necessary to have mocks for unit tests. They can be a useful tool, but they aren't required.

I am fine with having fake implementations and so forth, but the whole "when function X is called with Y arguments, return Z" thing is bad. It leads to very tight coupling of the test code with the implementation, and often means the tests are only testing against the engineer's understanding of what's happening - which is the same thing they coded against in the first place. I've seen GP's example of tests being green but the code not working correctly a number of times because of that.
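
A rough Python illustration of the difference (hypothetical repository interface): the fake actually behaves like storage, while the stubbed mock just replays the answers I already assumed the flow would produce.

    from unittest.mock import Mock

    # Hypothetical service under test.
    def register_user(repo, email):
        if repo.find_by_email(email):
            raise ValueError("already registered")
        repo.save({"email": email})
        return repo.find_by_email(email)

    # Fake: a real, if simplified, implementation of the repository contract.
    class FakeUserRepo:
        def __init__(self):
            self._users = {}
        def save(self, user):
            self._users[user["email"]] = user
        def find_by_email(self, email):
            return self._users.get(email)

    def test_register_with_fake():
        repo = FakeUserRepo()
        assert register_user(repo, "a@example.com") == {"email": "a@example.com"}

    def test_register_with_stub():
        # "When find_by_email is called, return Z": the canned answers replay
        # my own mental model of the flow, so the test mostly confirms the
        # code matches what I already believed; it never notices whether
        # save() actually persisted anything.
        repo = Mock()
        repo.find_by_email.side_effect = [None, {"email": "a@example.com"}]
        assert register_user(repo, "a@example.com") == {"email": "a@example.com"}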

Most compilers do not use "unit" tests per se. Much more common are integration tests targeted at a particular lowering phase or optimization pass.

This is pretty important, since "unit tests" would be far too constraining for reasonable modifications to the compiler; e.g. adding a new pass could change the actual output code without modifying the semantics.

LLVM has "unit" tests.

I mean they run a single pass on some small LLVM IR input and check that the output IR is as expected.
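
Roughly what such a regression test looks like, in the lit/FileCheck style the LLVM test suite uses (a trimmed-down sketch, not copied from the actual suite): one pass is run over a tiny IR function and the resulting IR is checked.

    ; RUN: opt -passes=instcombine -S %s | FileCheck %s

    ; instcombine should fold away the add of zero.
    define i32 @add_zero(i32 %x) {
    ; CHECK-LABEL: @add_zero(
    ; CHECK-NEXT:    ret i32 %x
      %r = add i32 %x, 0
      ret i32 %r
    }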

>Explain, then: what is your alternative to unit and integration tests?

Tests against real components instead of mocks.
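
A minimal Python sketch of what I mean (hypothetical repository; SQLite in-memory is used only to keep the example self-contained, in practice you would point this at the same engine production uses, e.g. Postgres or MySQL in a container, precisely so configuration differences like encodings show up):

    import sqlite3

    # Hypothetical repository exercised against a real database engine,
    # not a mock of the connection.
    class ThreadRepo:
        def __init__(self, conn):
            self.conn = conn
            self.conn.execute("CREATE TABLE IF NOT EXISTS threads (title TEXT)")

        def create(self, title):
            self.conn.execute("INSERT INTO threads (title) VALUES (?)", (title,))

        def titles(self):
            return [row[0] for row in self.conn.execute("SELECT title FROM threads")]

    def test_create_thread_against_real_db():
        repo = ThreadRepo(sqlite3.connect(":memory:"))
        repo.create("Ask HN: mocks or not?")
        assert repo.titles() == ["Ask HN: mocks or not?"]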

>If your tests either leave out invariants that are behind bugs and regressions or have invalid/insufficient inputs, the problem lies in how you created tests, not in the concept of a mock.

Nowadays external components can be very complex systems: e.g. databases, messaging queues, third-party APIs, and so on.

A lot of things can go wrong, and you aren't even aware of them, so you can't get the mocks right.

An example? Emojis.

On a mocked in-memory database they work fine, but fail on a real DB due to encoding settings.
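
A sketch of that failure mode in Python (hypothetical code; the database detail is the classic MySQL utf8 vs utf8mb4 trap, where the 3-byte utf8/utf8mb3 charset cannot store 4-byte emoji):

    # Hypothetical repository test. The fake has no notion of encodings,
    # so it happily stores anything.
    class FakeThreadRepo:
        def __init__(self):
            self.rows = []

        def insert_title(self, title):
            self.rows.append(title)
            return title

    def test_create_thread_with_emoji():
        repo = FakeThreadRepo()
        assert repo.insert_title("Show HN: my project 🚀") == "Show HN: my project 🚀"

    # Against a real MySQL table created with CHARACTER SET utf8 (an alias for
    # the 3-byte utf8mb3), the same insert is rejected with error 1366
    # "Incorrect string value", because emoji need 4 bytes per character.
    # Illustrative only: it needs a running server and a driver; the fix is
    # to use utf8mb4 throughout.
    #
    #   cursor.execute("INSERT INTO threads (title) VALUES (%s)",
    #                  ("Show HN: my project 🚀",))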

I'm not a fan of extensive mocking, but you're conflating two rather different test cases. A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration. You should of course have tests against the database too, but then you'd mock out parts of the application instead, not the database itself.

>A test where you mock out the database connection is testing something in application code, which has nothing to do with database configuration.

Wdym?

You're testing, say, a simple CRUD operation, e.g. creating an HN thread.

With the mocked DB it passes; with the real DB it fails due to an encoding issue.

The result is that the tests are green, but the app does not work.