Archive for October, 2017

Week 23 at an Unbootcamp

Sunday, October 15th, 2017

Best practices

As I noted last week, Learners Guild in Oakland works to instill in its Learners a variety of virtues, not just the ability to write code that computes correctly. The best practices include:

  • Stylistically consistent code (with respect to indentation, punctuation, capitalization, etc.)
  • Well-modularized code (each piece doing one thing)
  • Self-explanatory names of variables
  • Test-driven development (writing thorough tests before coding)
  • Parsimony (putting any code that is used multiple times into a named function rather than repeating the code)
  • Kind, helpful, and respectful conduct toward teammates

An important practice that supports the first of these is linting. You might think of it as an automatic process that reads all of your code and screams at you if you have failed to comply with your field’s counterpart of the Chicago Manual of Style or the Blue Book. At Learners Guild, the linter most commonly recommended is named ESLint. Unlike some linters, it has only one opinion: you should be consistent. It doesn’t care which rules you obey, as long as you obey them all the time. You decide which rules to turn on and off, and, for the rules you turn on, you can make a violation either fail your code or merely trigger a warning. In this respect, linting with ESLint is distinct from using published style manuals, since manuals usually tell you which rules to obey. ESLint is basically a write-your-own-style-manual tool, which then enforces your rules for you.

Reaching out

During my just-ended week 23 at the Guild, my relationship with ESLint changed. Here’s the story.

I’m in phase 4, where Learners have stopped doing exercises and are actually helping build, improve, and repair real open-source software. I began, like everybody in phase 4, doing this with the software that manages the Guild’s own curriculum and monitors the Learners’ progress through it. I added missing functionality and repaired bugs that I found.

By mid-week I had reached a reasonable turning point. I had submitted 80 different changes to the Guild’s software and was waiting for them to be approved and incorporated into the running code (or criticized for flaws requiring correction). However, phase 4 was suffering from a temporary shortage of leadership, causing the work done by the Learners to pile up and leaving some of us without obvious ways to make more contributions.

This personnel imbalance was one manifestation of the flux that the Guild is now in, as it takes stock and embarks on an effort at cooperative (i.e. Learner–staff) redesign of the Guild itself. In response, some Learner time is being spent making proposals for Guild reorganization. Other than that, Learners are exercising plenty of de facto discretion in deciding how to progress as members of this learning community. Several Learners are busy learning (and teaching each other) technologies and frameworks that have not (yet) made it into the curriculum, such as blockchain development. I decided this week to shift my contributions from the Guild’s software to external open-source projects—something that is in the phase-4 description but was not currently being practiced.

Having made this decision in principle, I proceeded to survey the thousands of open-source projects that can use contributions from JavaScript developers. After a couple of days of research, I narrowed my list of candidates to 6 projects: Etherpad Lite, pdf.js, Sandstorm, Sequelize, Material Components for the Web, and ESLint. I ultimately chose ESLint. Thus, I decided to stop being merely a user of ESLint, and start also being one of its producers. (“Making my own dog food.”)

When you decide to contribute to such a project, you first follow the instructions for contributors. They tell you how to install a copy of the software on your own computer. That’s where you’ll do all your coding and testing, before you are ready to submit proposed changes.

I followed the instructions and got to the point of running the existing tests, to ensure that the software on my machine would behave as expected. I entered the command npm test. About 2 minutes later, I got the results: 17,644 tests passing, 1 test failing.

I couldn’t start working on the code until the count of failing tests became 0. A single failing test is unacceptable. But hey, what an opportunity! Maybe I could figure out what went wrong with that test, and, if it was a problem with the test, I could correct it, thereby making my first contribution to ESLint. That is exactly what I did. The culprit was, indeed, a flaw in the test. The test made an assumption that I wouldn’t have a certain file in a certain directory—but I did. So I proposed a sequence of changes to deal with this problem, starting with instructions and warnings to contributors and more accurate error messages, and ending with a revised test that would not fail under the same condition.

When I began submitting proposals to change the documentation and code, project personnel began responding within 15 minutes. Proposals that were ready for adoption were merged into the running code within hours, or by the next day. Proposals needing revision elicited clearly explained change requests. And accepted contributions got expressions of gratitude.

What ESLint has, and Learners Guild (like the vast majority of open-source projects) is only beginning to approach, is a rock-solid system for managing volunteer labor. The instructions are exhaustive. The battery of tests is enormous. The response time to contributions is lightning-fast. And the decision makers are courteous, clear, competent, and thankful. A corollary benefit is personal validation: If you get contributions accepted and incorporated into ESLint, then you know (and any JavaScript developer knows) that you must be doing something right.

We Learners are now being asked to tell the Guild how to make itself better. One piece of advice I’m giving is: Make contributing to external projects a fundamental element of the curriculum, and help us learn to do it well.


Week 22 at an Unbootcamp

Saturday, October 14th, 2017

Making it work

For me, week 22 at Learners Guild in Oakland was the second week in phase 4. In that phase we aspiring software developers make use of our theoretical learning and programming exercises to work on real open-source applications that have real users.

In mid-week, I finally finished getting myself set up to work on the applications that Learners Guild itself uses to manage the learning process. Using those applications, Learners choose the modules they wish to study, check off the skills that they are acquiring, get paired up to work on joint projects, look up technical terminology, identify their fellow Learners and the projects they have worked on, and look up Guild policies and recommendations.

With that preparation complete, I started work on an as-yet unclaimed task: writing tests for a few database functions. The idea here is that there should be a batch of tests that can be performed on the software to verify that it is working as expected, and, before any change in the software is incorporated into the active version, those tests should all be run. If any of them fails, the change should not be activated.

My job was to write a few such tests for functions that keep track of the skills that Learners have acquired. In principle, those tests should have already existed. In fact, in line with the practice of test-driven development, embraced by the Guild, the tests should have been written before any code was written. They would all fail, of course. Once the coding began, one by one the tests would begin to pass. Eventually they would all pass. But this discipline had not been observed when the Guild’s staff hastily threw together the software for a new curriculum a year ago, nor when it revised that curriculum in June. Better late than never, phase-4 Learners were now creating tests retroactively.

In the course of developing tests for the management of Learners’ lists of skills, I discovered that this function had stopped working. Learners who marked skills found that those skills were no longer marked when they returned later to the page listing all the skills. So I set about figuring out why. The software is so complex that I couldn’t afford to learn how all of it works before diagnosing this bug. It was a process of poking the software to see how it would respond and investigating any hypotheses that came to mind. Within a few hours I had discovered the reason for the bug: Somebody had changed the name of a column in the table of Learners’ skills, but neglected to make the same change in the function that recorded changes in those skills. Every attempt to record a change with that function silently failed because of the misnamed column. So I corrected that defect in my own local copy of the application and verified that the application was now recording skill-repertoire changes. That done, I submitted the correction for review and approval and then proceeded to develop the tests. Had my new tests existed in the beginning, that bug would have been caught as soon as the column was renamed.
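This failure mode — a query that errors because of a renamed column, while the caller reports nothing — can be sketched as follows. The table, column, and function names here are invented for illustration; the Guild’s actual schema and code are different.

```javascript
// Sketch of a silent failure after a column rename. The fake database
// accepts the new column name `skills` but rejects the old `skill_list`.
const fakeDb = {
  query(sql) {
    return sql.includes('skill_list')
      ? Promise.reject(new Error('column "skill_list" does not exist'))
      : Promise.resolve({ rowCount: 1 });
  }
};

// Bug: this function still uses the old column name, and it discards the
// rejection, so every call appears to succeed while recording nothing.
function recordSkills(db, learnerId, skills) {
  db.query('UPDATE learner_skills SET skill_list = $1 WHERE learner_id = $2')
    .catch(() => { /* error dropped: the caller never learns of it */ });
}

recordSkills(fakeDb, 'learner-123', ['sql']);
```

A test that exercised recordSkills and then read the data back would have exposed the mismatch immediately, which is exactly why the retroactive tests were worth writing.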

The Guild knows that a rigorous testing discipline is a “best practice” in software development, but, like many small startup firms and many individuals, it doesn’t always comply with the principles that it espouses. So, are we learning from its example to be casual about testing? Or are we learning to do rigorous testing? In my own case, at least, I perceive lapses in the Guild’s own software discipline as negative examples to be avoided. They seem to be hardening my resolve to do things right. “Do what I say, not what I do.”


When the quake starts, duck or run?

Sunday, October 1st, 2017

How can you maximize your chances if you are indoors when a major earthquake starts? The conventional advice given by most experts to residents of the United States is: Stay inside and crouch under a sturdy piece of furniture.

Residents of Berkeley Town House, a senior housing cooperative, got that advice from a highly qualified seismic engineer, David Ojala, during a March 2017 briefing that I organized. Ojala explained the basis of his opinion as a balance between risks: residents could be killed or injured while fleeing the building or, if the building were to collapse, while staying inside, but he estimated the risk from flight as substantially greater.

Until today, I had the impression that this opinion was the consensus of the entire fraternity of expert seismic engineers.

Not anymore. Now I’m aware that it is at least somewhat controversial when applied to buildings of the Berkeley Town House type, namely older non-ductile (i.e. stiff) concrete buildings. The dissenting opinion that I encountered today is that of H. Kit Miyamoto, set forth in his 27 September dispatch from a tour of earthquake damage in Mexico City. Here is the pertinent passage:

Old concrete structures are one of the most dangerous building types that exist on earth. Their inadequate reinforcing details make the concrete very brittle under seismic motion. All damaged buildings we saw today were this type. You know how we teach people to duck under a desk during an earthquake? We teach this in California. But if I’m in a nonductile concrete structure, I will run as fast as I can outside and so should everybody.

Miyamoto goes on to explain that this advice would not turn out to be right in every situation:

This earthquake was also different from the 1985 one. The Mexico City earthquake in 1985 was a long-distance earthquake with long, swaying motion. This affected taller structures, six stories and higher. But this earthquake had a much sharper and faster shaking motion, which impacted three- to six-story structures more. It’s what we called “resonance effect.” Every earthquake is different and affects different types of buildings. So, one cannot say that a building is safe if it went through one earthquake. Next one can be totally different.

So, unless you know in advance the characteristics of the next quake, if Miyamoto is right then you can’t be sure whether it is better to duck or to run until it is too late to make the choice. But at least one thing is clear: you can’t do what all the experts recommend, because they don’t all recommend the same thing.