So the idea with TDD is that you write the minimal amount of code that makes the tests pass, right?

And the tests passing are supposed to make me feel good?

Well then it doesn't work because I know my code is wrong and will cause issues further down the line and I can't not feel bad about it.

Also why does this cause me physical pain
@wolf480pl I guess then you're supposed to write tests for the things you know you'll need, or that you know can go wrong later on, if possible?
@schmorp writing tests for race conditions is difficult
@wolf480pl okay, I said nothing 😂

@wolf480pl

> ... the idea with TDD is that you write the minimal amount of code ...

In my experience, both the "test code" and the "actual code" have to bend over backwards to allow for (unit) testing, especially when your "actual code" is side-effect heavy.

Anyone that had to `stubClock.advance(5, MINUTES)` can relate.
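To illustrate the "bend over backwards" point: a minimal sketch (all names hypothetical, not from any real codebase) where the production class has to accept an injected clock purely so the test can drive time by hand, the Python equivalent of `stubClock.advance(5, MINUTES)`:

```python
class StubClock:
    """Test double: time only moves when the test says so."""
    def __init__(self):
        self.now = 0.0

    def advance(self, seconds):
        self.now += seconds

    def time(self):
        return self.now


class SessionCache:
    """Expires entries after ttl seconds. The clock parameter exists
    only so tests can substitute a StubClock for real time."""
    def __init__(self, clock, ttl=300):
        self.clock = clock
        self.ttl = ttl
        self.entries = {}

    def put(self, key, value):
        self.entries[key] = (value, self.clock.time())

    def get(self, key):
        value, created = self.entries.get(key, (None, None))
        if created is None or self.clock.time() - created > self.ttl:
            return None
        return value


clock = StubClock()
cache = SessionCache(clock, ttl=300)
cache.put("session", "data")
clock.advance(5 * 60 + 1)          # the stubClock.advance(5, MINUTES) moment
assert cache.get("session") is None  # entry has expired
```

Note the contortion: `SessionCache` would be simpler with a plain `time.time()` call inside, but then the test would have to actually sleep for five minutes.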

@michal I'm not writing unit tests. My actual code is a network server, so my tests interact with it over the network. Not sure if these count as end-to-end tests or more like integration tests... I'm trying to make them pretty exhaustive, testing each message type separately, with all the error handling and stuff, so that at the end I can throw away my actual code and end up with a protocol compliance testsuite.

@wolf480pl In that case a "spec check" is actually a nice thing to have, especially if you anticipate multiple implementations. Makes it easy to judge how much of a feature set is available.

E.g. kaitai-struct has a set of checks [1] for the different languages it's implemented in, for you (user) to see what features work and (language implementer) to see what needs fixing.

[1] https://github.com/kaitai-io/kaitai_struct_tests/tree/master/spec

@michal hmm looks like they have separate tests for every language. As a library, it makes sense for them. But for me, having the tests interact with the tested code over the network is easier, this way it's only the test code that needs to bend over backwards :P

@wolf480pl It might actually be an advantage then. Have your "spec checker" receive a host:port argument and run a barrage of tests against it using the public API.

The next step would be exposing it as a public service, a-la https://federationtester.matrix.org/ :)
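A sketch of what such a host:port "spec checker" could look like: the checks know nothing about the implementation and only talk to it over the network. The line-based uppercase-echo protocol here is invented purely for the demo (a toy server is spun up in a thread so the sketch is self-contained):

```python
import socket
import threading

def toy_server(sock):
    # Stand-in for the real implementation under test:
    # accepts one client and echoes each line back, uppercased.
    conn, _ = sock.accept()
    with conn:
        buf = b""
        while True:
            data = conn.recv(1024)
            if not data:
                break
            buf += data
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                conn.sendall(line.upper() + b"\n")

def check_compliance(host, port):
    # The spec checker: one entry per protocol feature, each exercised
    # strictly through the public network API.
    results = {}
    with socket.create_connection((host, port)) as c:
        c.sendall(b"hello\n")
        results["uppercases lines"] = c.makefile().readline() == "HELLO\n"
    return results

# Wire it together on an ephemeral port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=toy_server, args=(srv,), daemon=True).start()
print(check_compliance("127.0.0.1", srv.getsockname()[1]))
```

Because the checker only needs a host and port, pointing it at someone else's deployment (the federationtester idea) is the same call with different arguments.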


@michal yup, that's the plan long-term.

After I finish prototyping and write down the spec.

@wolf480pl the idea is that you write the minimum amount to get the test passing, so that when you add more tests and more code you’ll know if you ever break any functionality.
@hannibal_ad_portas yeah, but it means I'm not supposed to fix code that passes tests, right?
But I'm not happy with that code.

@wolf480pl then write a failing test, and fix it.

If you know your code is brittle and wrong don’t leave it like that.

@wolf480pl @hannibal_ad_portas I've heard of red-green-refactor, is that part of TDD? I would take it to mean that you can refactor the code even after the test is passing.

@jvalleroy @wolf480pl
Yes, you’re not supposed to add new functionality without a failing test, but the tests you already have give you a safety net to refactor
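The red-green-refactor loop being described can be sketched in miniature (a toy example invented for illustration, not from the thread):

```python
# RED: write a failing test first.
def test_slug():
    assert slug("Hello World") == "hello-world"
    assert slug("  Trim me  ") == "trim-me"

# GREEN: the minimum (possibly ugly) code that makes it pass.
def slug(title):
    return "-".join(title.strip().lower().split())

test_slug()

# REFACTOR: same behaviour, clearer intent. The already-passing
# test is the safety net -- it must keep passing unchanged.
def slug(title):
    words = title.lower().split()  # split() already drops outer whitespace
    return "-".join(words)

test_slug()
```

The refactor step is exactly where "I'm not happy with that code" gets addressed: you change passing code freely, as long as the tests stay green.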

@wolf480pl @hannibal_ad_portas Unless you write further tests that don't pass with the current code because of what you know is bad in the code?

But really, I don't think there is anything in TDD that says you can't change code that passes the tests, just that you shouldn't add new features to it without a failing test first? (disclaimer: I never did any formal TDD training or anything)