Adam Turoff on Wed, 26 Jan 2000 13:10:14 -0500 (EST)
> On a more general note concerning XP, I find it difficult to understand how
> one might go about coding by first generating a set of tests and then coding
> until all tests work.

Here, it's important to keep in mind that the author is a Smalltalk
consultant and has been floating around the OOP community for a number of
years.  With that in mind, think about coding in a C++/Java/Smalltalk style
where everything is an object and you're testing each method individually.

With that mindset, it should be easier to test your code when it is
properly factored[1] - every method does one "atomic" operation in your
program.  That is, it should be easier to test code that calculates
splines, planetary orbits and gravitational interaction than to test a
monolithic function like shortest_route_to_mars_from_earth().

> The concept is straightforward, but it seems to me
> that the difficult part is knowing when you have the "right" set of tests.
> Maybe it will become clearer once I read the chapter on testing strategy.

To begin with, the "right" set of tests handles common cases accurately,
exceptional cases properly and border cases appropriately.  This has
obvious implications if you're writing test code for fopen() or sqrt();
it may be more difficult to see how to apply it to real-world code, though.

One interaction within XP is between testing and refactoring.  Since
you're constantly rewriting existing code on an XP project (using the
pre-existing tests as a validation suite), you can treat tests that are
difficult to write as an indication that your code/design needs to be
refactored.

Z.

[1] This idea has been floating around for over 30 years, possibly dating
    back to Simula67.  I came across a reference to properly factoring
    code as far back as 1970:
    http://www.forth.com/Content/History/History1.htm

**Majordomo list services provided by PANIX <URL:http://www.panix.com>**
**To Unsubscribe, send "unsubscribe phl" to majordomo@lists.pm.org**
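As a postscript, the common / exceptional / border breakdown above can be
sketched concretely.  This is a minimal, hypothetical example (not from the
original post) using Python's math.sqrt() as the function under test; the
test names and the final driver loop are illustration only:

```python
import math

# Hypothetical sketch: a "right" set of tests for sqrt() covers
# common cases, exceptional cases, and border cases.

def test_common_case():
    # Common case: an ordinary positive input gives the expected root.
    assert math.sqrt(9.0) == 3.0

def test_exceptional_case():
    # Exceptional case: a negative input must fail loudly.
    try:
        math.sqrt(-1.0)
    except ValueError:
        return
    raise AssertionError("sqrt(-1.0) should raise ValueError")

def test_border_cases():
    # Border cases: the edges of the valid input domain.
    assert math.sqrt(0.0) == 0.0
    assert math.sqrt(float("inf")) == float("inf")

for test in (test_common_case, test_exceptional_case, test_border_cases):
    test()
```

The same three-way split applies to fopen(): a readable existing file
(common), a nonexistent file or bad mode string (exceptional), and an
empty file or zero-length filename (border).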