Making it work
For me, week 22 at Learners Guild in Oakland was the second week of phase 4. In that phase, we aspiring software developers apply our theoretical learning and programming exercises to real open-source applications that have real users.
In mid-week, I finally finished getting myself set up to work on the applications that Learners Guild itself uses to manage the learning process. Using those applications, Learners choose the modules they wish to study, check off the skills that they are acquiring, get paired up to work on joint projects, look up technical terminology, identify their fellow Learners and the projects they have worked on, and look up Guild policies and recommendations.
With that preparation complete, I started work on an as-yet unclaimed task: writing tests for a few database functions. The idea is that there should be a batch of tests that can be run against the software to verify that it is working as expected, and, before any change to the software is incorporated into the active version, all of those tests should be run. If any of them fails, the change should not be activated.
My job was to write a few such tests for functions that keep track of the skills that Learners have acquired. In principle, those tests should have already existed. In fact, in line with the practice of test-driven development, embraced by the Guild, the tests should have been written before any code was written. They would all fail, of course. Once the coding began, one by one the tests would begin to pass. Eventually they would all pass. But this discipline had not been observed when the Guild’s staff hastily threw together the software for a new curriculum a year ago, nor when it revised that curriculum in June. Better late than never, phase-4 Learners were now creating tests retroactively.
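The red-then-green rhythm described above can be sketched in a few lines. This is only an illustration, not the Guild's actual code: the function name `skills_for` and the record shape are hypothetical.

```python
# Test-driven style: the assertion at the bottom is conceived first.
# Before skills_for exists, running this file fails ("red"); once the
# function is written, the same assertion passes ("green").

def skills_for(learner, records):
    """Hypothetical helper: list the skills recorded for one learner."""
    return [r["skill"] for r in records if r["learner"] == learner]

records = [
    {"learner": "ada", "skill": "testing"},
    {"learner": "grace", "skill": "sql"},
]

assert skills_for("ada", records) == ["testing"]
```

Writing the assertion first forces the author to decide what "working" means before deciding how the code will achieve it.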
In the course of developing tests for the management of Learners’ lists of skills, I discovered that this function had stopped working. Learners who marked skills found that those skills were no longer marked when they returned later to the page listing all the skills. So I set about figuring out why. The software is so complex that I didn’t have the luxury of learning how it works in full detail before diagnosing this bug. It was a process of poking the software to see how it would respond and investigating any hypotheses that came to mind. Within a few hours I had discovered the reason for the bug: somebody had changed the name of a column in the table of Learners’ skills but had neglected to make the same change in the function that recorded changes to those skills. Every attempt to record a change with that function silently failed because of the misnamed column. So I corrected that defect in my own local copy of the application and verified that the application was now recording skill-repertoire changes. That done, I submitted the correction for review and approval and then proceeded to develop the tests. Had my new tests existed from the beginning, the bug would have been caught as soon as the column was renamed.
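A write-then-read-back test is exactly the kind of test that catches a renamed column. Here is a minimal sketch using SQLite; the table name `learner_skills`, the column names, and the two functions are my own invented stand-ins, not the Guild's schema.

```python
import sqlite3

# Hypothetical schema after the rename: the column is now "skill"
# (imagine it used to be called "skill_name").
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE learner_skills (learner_id TEXT, skill TEXT)")

def record_skill(conn, learner_id, skill):
    """Record that a learner has acquired a skill."""
    conn.execute(
        "INSERT INTO learner_skills (learner_id, skill) VALUES (?, ?)",
        (learner_id, skill),
    )

def skills_for(conn, learner_id):
    """Return the skills recorded for one learner."""
    rows = conn.execute(
        "SELECT skill FROM learner_skills WHERE learner_id = ?",
        (learner_id,),
    ).fetchall()
    return [row[0] for row in rows]

# The test: write a skill, then read it back. If record_skill still
# referred to the old column name, this assertion would fail the moment
# the rename happened, instead of users discovering it weeks later.
record_skill(conn, "learner-1", "sql-basics")
assert skills_for(conn, "learner-1") == ["sql-basics"]
```

The point is that the test exercises the write path and the read path together, so any mismatch between them, such as one side using a stale column name, surfaces immediately.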
The Guild knows that a rigorous testing discipline is a “best practice” in software development, but, like many small startups and many individuals, it doesn’t always comply with the principles it espouses. So, are we learning from its example to be casual about testing? Or are we learning to do rigorous testing? In my own case, at least, I perceive lapses in the Guild’s software discipline as negative examples to be avoided. They seem to be hardening my resolve to do things right. “Do as I say, not as I do.”