Just skimmed the PR, I'm sure the author knows more than I - but why hard code a date at all? Why not do something like `today + 1 year`?

That introduces a dependency on a clock, which might be undesirable. I just had a similar problem where I also went for hardcoding for that reason.

There's already a clock dependency. The test fails because of that.

Arguably you should have a fixed start date for any given test, but time is quite hard to abstract out like that (there are enough time APIs that you'd want OS support, but Linux, for example, doesn't support clock namespaces for the realtime clock, only a few monotonic clocks).
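Without OS support you can still push the clock dependency to the test boundary by passing "today" in explicitly. A minimal Python sketch (`expiry_date` is a hypothetical stand-in for the code under test):

```python
from datetime import date

def expiry_date(today: date) -> date:
    # Hypothetical function under test: something expires one year out.
    # (A real implementation would need to handle Feb 29.)
    return today.replace(year=today.year + 1)

def test_expiry_date():
    # Pin "today" to a fixed date so the test never drifts as the
    # real clock advances -- no hard-coded future date needed.
    fixed_today = date(2024, 1, 15)
    assert expiry_date(fixed_today) == date(2025, 1, 15)

test_expiry_date()
```

Production code calls it with `date.today()`; only the tests pass a fixed date.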

Because it should be `today + 1 year + randomInt(1,42) days`.

Always include some randomness in test values.

Generate fuzz tests using random values with a fixed seed, sure, but using random values in tests that run on CI seems like a recipe for hard-to-reproduce flaky builds unless you have really good logging.

Not a good idea for CI tests. It will just make things flaky and gum up your PR/release process. Randomness or any form of nondeterminism should be in a different set of fuzzing tests (if you must use an RNG, a deterministic one is fine for CI).
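If you do want randomized values on CI, a deterministic RNG with a logged seed keeps any failure reproducible. A sketch with Python's stdlib (env var name is made up):

```python
import os
import random

# Take the seed from the environment so CI can pin it, and print it
# either way so any failure can be replayed locally with the same seed.
seed = int(os.environ.get("TEST_SEED", "12345"))
print(f"TEST_SEED={seed}")
rng = random.Random(seed)

def test_offset_days():
    # Same seed -> same sequence -> the test is deterministic per run.
    offset = rng.randint(1, 42)
    assert 1 <= offset <= 42

test_offset_days()
```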

That's why it's "randomInt(1,42)", not "randomLong()".

if it makes things flaky

then it actually is a huge success

because it found a bug you overlooked in both impl. and tests

at least iff we speak about unit tests

> Always include some randomness in test values.

If this isn't a joke, I'd be very interested in the reasoning behind that statement, and whether or not there are some qualifications on when it applies.

humans are very good at overlooking edge cases, off-by-one errors, etc.

so if you generate test data randomly you have a higher chance of "accidentally" running into overlooked edge cases

you could say there is an "adding more random -> cost" ladder, like:

- no randomness, no cost, nothing gained

- a bit of randomness, very small cost, very rarely beneficial (<- doable in unit tests)

- (limited) prop testing, high cost (test runs multiple times with many random values), decent chance to find incorrect edge cases (<- can be barely doable in unit tests, if limited enough; often feature-gated as too expensive)

- (full) prop testing/fuzzing, very very high cost, very high chance incorrect edge cases are found IFF the domain isn't too large (<- a full test run might need days to complete)
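The middle rungs of that ladder don't need a framework: run the property over a bounded number of seeded random cases, so cost stays fixed and failures are reproducible. A sketch, using `math_add` from elsewhere in the thread as a stand-in:

```python
import random

def math_add(x: int, y: int) -> int:
    return x + y

def test_add_properties(cases: int = 100, seed: int = 7) -> None:
    # Limited property test: fixed seed and case count keep the run
    # cheap and fully reproducible, unlike open-ended fuzzing.
    rng = random.Random(seed)
    for _ in range(cases):
        x = rng.randint(-10**6, 10**6)
        y = rng.randint(-10**6, 10**6)
        assert math_add(x, y) == math_add(y, x)      # commutativity
        assert math_add(x, 0) == x                   # identity
        assert math_add(math_add(x, y), -y) == x     # adding then subtracting

test_add_properties()
```

Dedicated tools (Hypothesis, QuickCheck, etc.) add shrinking and better failure messages on top of the same idea.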

I've learnt that if a test only fails sometimes, it can take a long time for somebody to actually investigate the cause; in the meantime it's written off as just another flaky test. If there really is a bug, it will probably surface sooner in production than it gets fixed.

sadly yes

people often take flaky tests way less seriously than they should

I had multiple bigger production issues which had been caught by tests >1 month before they happened in production, but were written off as flaky tests (ironically this was also not related to any random test data but more to load/race-condition things, which failed when too many tests that each created a full separate tenant for isolation happened to run at the same time).

And in some CI environments flaky tests are too painful, so using "actual" random data isn't viable and a fixed seed has to be used on CI (that is, if you can, because too many libs/tools/etc. don't allow that), at least for "merge approval" runs. That many CI systems suck badly the moment your project and team size aren't around the size of a toy project doesn't help either.

Must be some Mandela effect about some TDD documentation I read a long time ago.

If you test math_add(1,2) and it returns 3, you don't know if the code does `return 3` or `return x+y`.

It seems I might need to revise my view.

I vaguely remember the same advice, it's pretty old. How you use the randomness is test specific, for example in math_add() it'd be something like:

  jitter = random(5)
  assertEqual(3 + jitter, math_add(1, 2 + jitter))

If it was math_multiply(), then adding the jitter would fail - that would have to be multiplied in.

Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.
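A runnable version of that jitter idea, showing why the relation is operation-specific (`math_add`/`math_multiply` are stand-ins):

```python
import random

def math_add(x, y):
    return x + y

def math_multiply(x, y):
    return x * y

rng = random.Random(42)  # fixed seed keeps the jitter reproducible
jitter = rng.randint(0, 5)

# For addition, shifting one operand shifts the result by the same amount,
# so a hardcoded `return 3` would fail for any nonzero jitter.
assert math_add(1, 2 + jitter) == 3 + jitter

# For multiplication the same shift is scaled by the other operand instead.
assert math_multiply(2, 3 + jitter) == 6 + 2 * jitter
```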

Randomness is useful if you expect your code to do the correct thing with some probability. You test lots of different samples and if they fail more than you expect then you should review the code. You wouldn't test dynamic random samples of add(x, y) because you wouldn't expect it to always return 3, but in this case it wouldn't hurt.

Are you joking? This is the kind of thing that leads to flaky tests. I was always counseled against the use of randomness in my tests, unless we're talking generative testing like quickcheck.

or, maybe, there is something hugely wrong with your code, review pipeline, or tests if adding randomness to unit-test values makes your tests flaky, and this is a good way to find it

`today` is random.

If "today" were random, our universe would be pretty fricken weird.

It's dynamic, but it certainly isn't random, considering it follows a consistent sequence

Interesting, haven't heard this before (I don't know much about testing). Is this kind of like fuzzing?

I recently had a race condition that made tests randomly fail because one test created "data_1" and another test also created "data_1".

- Test 1 -> set data_1 with value 1

- Test 1 -> `do some magic`

- Test 1 -> assert value 1 + magic = expected value

- Test 2 -> set data_1 with value 2

But this can fail if `do some magic` is slow and Test 2 starts before Test 1 asserts.

So I can either stop parallelism, even though in real life parallelism exists, or ensure that each test has a random id, just like it would happen in real life.
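Giving each test a unique key (instead of a shared "data_1") removes the collision without giving up parallelism; a minimal sketch using `uuid` (the in-memory `store` stands in for whatever the tests share):

```python
import uuid

store = {}  # stand-in for the shared database/fixture the tests race on

def make_test_key() -> str:
    # Each test gets its own key, so parallel tests can't clobber
    # each other's rows the way a shared "data_1" can.
    return f"data_{uuid.uuid4().hex}"

def test_1():
    key = make_test_key()
    store[key] = 1
    store[key] += 10          # stand-in for `do some magic`
    assert store[key] == 11

def test_2():
    key = make_test_key()
    store[key] = 2
    assert store[key] == 2

test_1()
test_2()
```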