Why? Because 3 separate times, I predicted how the test should fail, and it failed differently! Each time, the test failed in an unexpected way because I had either written the test setup incorrectly or misunderstood a library method¹.
Had I merely looked for a failing test, I would have started writing code to make it pass, only to be disappointed when it still didn't pass once I was done.
--
¹ Turns out Java's String.indent(4) normalizes line endings, meaning it will add a line ending to the last line, even if it didn't have one before! Surprise!
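A minimal sketch of the surprise (Java 12+, where `String.indent` was introduced); the variable names are illustrative:

```java
public class IndentSurprise {
    public static void main(String[] args) {
        String original = "hello";            // note: no trailing newline
        String indented = original.indent(4);

        // indent() adds the requested leading spaces, but it also
        // normalizes line terminators: every line, including the last,
        // ends up suffixed with '\n'.
        System.out.println(indented.endsWith("\n"));        // true
        System.out.println(indented.equals("    hello\n")); // true
    }
}
```

So a test asserting `"    hello"` (no newline) fails not because the production code is wrong, but because the expectation misunderstood the library.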
@jitterted I have always thought the circle diagram for #TDD sent the wrong message to new practitioners. Refactoring should always start and end in green. If you refactor and things go red you’ve done something else other than refactoring.
Yes, sure, #Rust prevents a lot of bugs at compile time already, but not logic bugs.
For example, in #CSVDiff we have ~70 unit tests and ~12 integration tests. The only "bug report" we have ever received was due to a corrupted CSV file (mistaken for a bug in the diff):
Today I realized I'm still paying for my old blog site (at TypePad) and I honestly don't know why. I suppose I should just scrape any content I care about and move it to my new site.
A lot of it was too old to be relevant or useful (hopefully nobody cares about ASP.NET MVC 2 any more). I saved copies of two #TDD related posts (one about "Testable Object Pattern" and one about me thinking "Design By Example" is a better name for TDD), and a 4-part series on async programming and the #dotnet TPL, which isn't necessarily useful as-is but could be adapted into something.
Note that you can do all of these without actually doing "red-green-refactor." I very rarely write true "red-green-refactor" style #TDD, but my code often looks like this because I brought in the worldview; I found that worldview desirable even when not doing strict TDD.
I find that the red-green-refactor approach works especially well for code where you already know all of the tools you are working with and have strong established patterns in how to use those tools.
So while I don't think #TDD research studies prove anything, I think they can be useful to help me highlight benefits that might not be obvious from focusing on the TDD cycle itself.
"We found that Java TDD projects were relatively rare."
Yup. Especially in public GitHub repositories, because very little application development happens there (it's mostly tools and libraries). I point to my codebases as good examples because—while small—they are real-world production apps.
"In addition, there were very few significant differences in any of the metrics we used to compare #TDD-like and non-TDD projects; therefore, our results do not reveal any observable benefits from using TDD."
And that's possibly "true", but it completely discounts things like: how long it takes to figure out where to make the next code change; how confident you are that the change works AND doesn't break anything else; and (related) how long it takes to implement the change.
I LOVE Hadestown. The characters raise a toast during the first act:
"To the world we dream of."
-pause-
"And"
-facing the audience-
"to the one we live in now."
> After 2 decades of doing TDD and BDD, I've realized that ignoring design and thinking that TDD will do a decent job at it for a trade-off for quicker/higher-quality development with the extra time was entirely a scam.
I react in two ways:
Yes, ignoring design seems risky and invites failure.
I don't know how to "do TDD and BDD" while ignoring design.
On the contrary, I do and teach #TDD as a mechanism for learning how to design "better".
I should clarify that the statement "do #TDD while ignoring design" is not entirely false, although it's at best misleading. My "Four Stages of TDD" model starts with Stage 1: test-first programming. In this stage, the programmer focuses on writing code that behaves correctly by writing tests first. Strictly speaking, one could call this "TDD while ignoring design", although I would call it "TDD while trying to ignore design". :)
Now I come back to my point: I claim that one cannot practise #TDD for very long without reaching Stage 2. In that sense, TDD inexorably leads the attentive programmer away from "ignoring the design". As they practise in Stage 2, they learn the limits of evolutionary design, notably which decisions they need to make up front and which ones they can defer. They develop confidence in their ability to refactor, which takes pressure away from "getting the design right the first time".
In Stage 2, they learn the value of refactoring over rewriting and this allows them to guide designs to evolve incrementally. When experienced practitioners talk about #TDD generally, they usually mean this stage. I remember in the early 2000s when we all talked about renaming TDD to "test-driven design" in order to hammer home this point.
The move to feedz.io is complete. Note that only new package builds will go here; if you're currently using the MyGet feed for prerelease packages, please ONLY update to feedz.io when you're ready to take newer dependencies.
I hate #daylightsavingstime. I hate it so much. The week when #DST changes occur lets me find the weirdest higgs-bugsons and mandelbugs in #GNOMECalendar while doing #QA.
At least the majority of those issues have already been durably fixed for #GNOME 46 by @danigm's fantastic #TDD (unit-test-backed) bugfixes 😌
When the software developer community comes up with a way of collaborating with the business to ensure solutions solve business problems, both business and developers have a tendency to downplay it as "a developer thing", and once business involvement is removed from the equation, it gets reduced to a bureaucratic burden on developers until they drop it, too.
#TDD #BDD #DDD #Scrum: These are tools for alignment between business and development. They require business involvement.
It has absolutely been worth it modernizing the assertion library in @xunit in v3, and back-porting it to v2 has helped catch a lot of issues.
The number of issues has been larger than I would've expected given our test coverage. It just goes to show people write code in ways we didn't anticipate. Fixing bugs and filling in more tests has made the framework better, but it's still taking time away from v3.
Uh oh. Hit that moment with some code where I'm like:
Premise: It works.
Premise: I think I know how.
Conclusion: This should be a failing test.
Result: The test passes.
Wait. What?
Check your premises; one of them is most likely incorrect.
And now I make a note to deal with it later. It works in production—all the tests in the library pass. I don't need to update the library for any reason...but something's wrong—even if it's just my understanding of how it works.
"I often jump into the TDD flow when I’m adding a new feature to a product or confirming the existence of a bug. If it’s not clear how I should approach the problem, the best way for me to start is with a test. Tests force me to break down the problem into steps to reach an initial solution, while refactoring gets me to a cleaner solution"
➥ Increment Magazine
One top misconception about #TDD is that you shouldn't refactor the tests as you go. You should—and that means you can delete some, too. It's like building an arch and then knocking out the supporting structure. The supports helped along the way but are no longer needed. It's not true that "the best code is the code that was never written". It is true that the best code modification is to delete it. Tests ensure that nothing unexpected happens once you have deleted that code. #programming
And well … it works well so far. And so many more things now make sense and work, e.g. TDD. I used to think TDD didn't make sense, except in some cases. But now, with a different point of view on how to create and structure software … it works. I could develop the current project fully #TDD.