How do you rate on the Joel Test?

Joel Spolsky laid out the Joel Test back in 2000 as, in his own words, a “highly irresponsible, sloppy test to rate the quality of a software team”. Being a MicroISV, I didn’t expect us to rate that high on the test, as some of the “rules” are obviously more appropriate for bigger teams. But see below for our rating.

1. Do you use source control?
We have been using a source control system from day one and wouldn’t develop even the simplest program without it. It’s just so convenient to have a backup of every single version we ever checked in. Branches are another important feature for us: when we have to fix a bug in the currently released version but have already made changes to our code that we don’t want to ship yet, we create a new branch and fix the bug there. We use Subversion as our source control tool and, although it could be faster, it’s the best source control system I have ever used.
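For illustration, this is roughly what that branching workflow looks like with Subversion’s command-line client (the repository URL, tag and branch names below are made up, not our real ones):

    # create a maintenance branch from the released version
    svn copy http://svn.example.com/repo/tags/1.0 \
             http://svn.example.com/repo/branches/1.0-fixes \
             -m "Branch for 1.0 bug fixes"

    # point a working copy at the branch and commit the fix there
    svn switch http://svn.example.com/repo/branches/1.0-fixes
    svn commit -m "Fix the reported bug on the 1.0 branch"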

2. Can you make a build in one step?
As Tobias outlined in the “Explaining our build system” blog posting, we use Cygwin and Makefiles for our build setup. We can make a build in one single step by running the top-level Makefile. The scripts handle everything: getting the sources, building the applications, help files, libraries, setups and so on. It has been a great time saver for us.
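Just to give an idea of the structure (the target names, directories and URL here are hypothetical, not our actual build scripts), a one-step top-level Makefile can look something like this:

    # top-level Makefile: running "make" builds everything in one step
    all: sources apps help setups

    sources:
            svn export http://svn.example.com/repo/trunk build/src

    apps: sources
            $(MAKE) -C build/src/console

    help: sources
            $(MAKE) -C build/src/help

    setups: apps help
            $(MAKE) -C build/installer

    .PHONY: all sources apps help setups

(In a real Makefile the recipe lines have to be indented with tabs.)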

3. Do you make daily builds?
We don’t have scheduled daily builds, as it doesn’t really make sense for us at this stage. We start builds by hand whenever we check bigger changes into the source control system. And since we have days without a single source code change, we don’t need to keep the build machine running 24/7. Besides, broken builds are very rare: we are just two developers, so the chances of breaking something are much smaller.

4. Do you have a bug database?
We use Trac as our bug database (and wiki). So far it has been okay, but it could be better. One thing I miss in Trac is a hierarchical overview of projects, bugs and feature requests, which we had in our home-grown bug database. We switched to Trac because our home-grown database was desktop-based and we needed something web-enabled. We haven’t moved to anything else because Trac’s Subversion integration and built-in wiki are excellent and we haven’t found a good alternative yet.
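The Subversion integration is mostly a matter of pointing Trac at the repository in trac.ini (the path below is only an example):

    [trac]
    repository_dir = /var/svn/smartinspect

With that in place, changesets show up in Trac’s timeline and source browser, and commit messages can reference tickets.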

5. Do you fix bugs before writing new code?
Most of the time we do fix bugs before writing new code, but there are situations where this is not true. Take Windows Vista, for example. We tested SmartInspect on Vista and fixed most of the problems immediately. For some problems we decided to wait for later RC versions or the final Vista release, as we weren’t sure whether they were bugs in Vista or in SmartInspect. So in general I would say that we adhere to this rule, but I don’t think it should be enforced strictly in every case.

6. Do you have an up-to-date schedule?
We do have an up-to-date schedule and are very strict about keeping SmartInspect in a ready-to-ship state, so if we have to release a new version of SmartInspect, we can do so pretty quickly. We set ourselves milestones with planned features and release dates and try to stay close to the schedule. We were pretty bad about this with SmartInspect 1.0, but have improved considerably since we released the first version.

7. Do you have a spec?
Strictly speaking, we have to say ‘no’ to this one. We do talk a lot about planned features, their implementation and the best design, and we write down use cases and user stories when we plan a new release. But we don’t write 50-page documents with detailed discussions of every single feature that goes into a new release, and I doubt it would be worth it. We do write small specs for planned future products when we come across a good new idea and want to preserve it, but we don’t have specs for new versions of SmartInspect.

8. Do programmers have quiet working conditions?
I’m happy to say that this one is true for us. As we are just two developers and both of us have a quiet private office, it’s pretty easy to confirm this.

9. Do you use the best tools money can buy?
I would say ‘yes’ on this one. We have multiple monitors, good desks and chairs, all the software that we need, and up-to-date machines. We joined the Empower Program, for example, to get all the operating systems we need for testing and production use. We also have other not-so-cheap tools that make it easier to generate the SmartInspect library documentation, which saves us a lot of time.

10. Do you have testers?
We obviously don’t have any dedicated testers, but we compensate for this with a huge number of automated test cases for the SmartInspect libraries and comprehensive test plans. We execute the test cases and test plans on a regular basis and keep the bug count surprisingly low (our customers confirm this).

11. Do new candidates write code during their interview?
As we haven’t interviewed any candidates, this one doesn’t really apply to us. I guess it doesn’t apply to most MicroISVs out there either and is more appropriate for bigger teams.

12. Do you do hallway usability testing?
We did quick usability tests of SmartInspect in the past and plan on doing them in the future. They have helped us a lot: we changed some features and default settings after seeing people have problems with the GUI. One thing the usability testing made us realize is that many people struggle with docking windows. That’s why we disabled docking in the SmartInspect Console by default and added a Reset Docking feature to undo any docking with one click.

All in all, we rate 8 out of 12 on the Joel Test, with 2-3 of the rules not really appropriate for a very small ISV like us. That’s not too bad, I guess, and we are pretty happy with our development setup. How do you rate on the test with your MicroISV?