
The people who believe in test-everything are probably the same people who believe in 100% code coverage.

I think most people, by now, have learned that 100% code coverage and test-everything are superfluous, so there's little point in discussing them, or in treating them as inherent problems of subscribing to TDD.

The idea of TDD is to write the minimum tests needed to prove the code works as per the requirements. When a bug is found, write the test first, then fix the bug. This way, at some point in the life of the software, you'll eventually have enough coverage. I think most people focus too much on the development story rather than the maintenance story, so most explanations cover how to do TDD on new code, not how to do TDD on existing code (or rather, the next phase).
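The bug-first workflow described above might look like this in practice (a minimal sketch; `slugify` and its missing-lowercase bug are hypothetical examples, not from any particular project):

```python
# Hypothetical buggy function: joins words with hyphens but
# forgets to lowercase the result, which the requirement asks for.
def slugify(title):
    return "-".join(title.split())

# Step 1: before touching the code, write a test that reproduces
# the reported bug. Run it and watch it fail against the version above.
def test_slugify_lowercases():
    assert slugify("Hello World") == "hello-world"

# Step 2: fix the bug so the new test (and the existing ones) pass.
def slugify(title):
    return "-".join(title.lower().split())
```

The failing test doubles as a regression guard: if the bug ever reappears, this test catches it, which is how the suite accumulates "enough" coverage over the software's lifetime.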

I've been on projects where, because the people behind them didn't put much effort into testing, the automation effort started from behind. Eventually you hit a chicken-and-egg situation: we'd like to refactor this buggy part, but the architecture makes it hard to write automated tests for it.
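One common way out of that chicken-and-egg bind is to carve out a small seam first, so the buggy logic can be put under test before the larger refactor (a sketch with hypothetical names; the "before" shape is typical of code with a hard database dependency):

```python
# Before: the pricing logic loaded its data from a live database,
# so testing it meant standing up real infrastructure.
#
#     def total_price(order_id):
#         order = db.load_order(order_id)   # hard dependency
#         ...

# After: the hard dependency is pushed out to the caller, leaving
# pure logic that a test can exercise with plain in-memory data.
def total_price(order, tax_rate=0.1):
    subtotal = sum(item["price"] * item["qty"] for item in order["items"])
    return round(subtotal * (1 + tax_rate), 2)

# A test can now cover the buggy part without touching a database.
order = {"items": [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]}
print(total_price(order))  # 27.5
```

The seam is deliberately small: you're not redesigning the architecture yet, only making enough of it testable that the refactor can proceed safely.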

All professional projects will have automated tests at some point in their life. People should know by now that software grows, and that hiring more QAs, re-testing everything (regression, smoke, full-blown, etc.), or even telling devs to manually test the code they just wrote doesn't scale.

Keep in mind that sometimes quality is defined by the client (or by the requirements). The client might not demand superb quality (as long as there is no data corruption), so one may not have to write extensive automated tests.


