Early in the development of Savant (back when it was HTML_Template_Dummy) I broke the assign() method without knowing it, then distributed the source to early-adopter testers. Of course, they discovered the break right away. Embarrassed, I wrote up a quick series of "visual" test scripts to run on each release. They are not automated; basically, they instantiate Savant and print out the results of various method calls, which I then eyeball to look for problems. While not optimal, and certainly not "best practice," it's good enough most of the time.
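To make the idea concrete, here's a minimal sketch of what such an "eyeball" test looks like. Savant itself is PHP, so this is only a Python analogue, and the `FakeTemplate` class is a hypothetical stand-in with an assign()-style method, not Savant's actual API:

```python
# Hypothetical stand-in for a Savant-like template object; the real
# Savant API is PHP, so this sketch only mirrors the general idea.
class FakeTemplate:
    def __init__(self):
        self._vars = {}

    def assign(self, name, value):
        # Store a variable for later display.
        self._vars[name] = value

    def fetch(self):
        # Render all assigned variables as "name: value" lines.
        return "\n".join(f"{k}: {v}" for k, v in sorted(self._vars.items()))

# The "eyeball" test: no assertions, just print the results of the
# method calls and let a human reviewer look for anything wrong.
tpl = FakeTemplate()
tpl.assign("title", "Hello")
tpl.assign("author", "Paul")
print("--- assign() / fetch() ---")
print(tpl.fetch())
```

Because the output is meant to be read by a person rather than a test runner, each printed section doubles as a small demonstration of how the method is called.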
However, such "eyeball" tests turn out to have an unexpected benefit. I just got a comment from Alex at Sourcelibre saying:
In version 2.3.2, the directory ... Savant2/tests are really usefull. I almost always prefer examples to explanations, and these examples are just perfect.
Well look at that. I wrote up code examples and I didn't even know it. While it's not documentation per se, it appears to add a lot of value to the package.
So now there's at least one more reason to write non-automated tests for your libraries: if the tests are designed to be human-readable rather than machine-readable, they can serve as both testing **and** tutorial.