I believe code reviews are very important to a software development team, not just because they can reduce defects before QA or the customer sees them, but because they promote awareness of the codebase in general and provide critical mentoring opportunities for the team. The main message of this book is that the earlier you find a bug, the cheaper it is to fix, and code reviews are an underrated way of accomplishing this. While I agree this is true, I believe the mentoring aspects of reviews are even more important: teach a man to fish, and all that.
While loaded with interesting analysis, one detail rang a bit untrue to me. Cohen considers every item found in a review to be a defect, but this is not always true in practice. For instance, asking that a comment be added doesn't mean the code in question is unintelligible; sometimes it simply asks for an explanation of why something was done (for instance, that an API class is not marked final due to customer request). When mentoring, it is also common to explain new techniques at review time: changing for (int i = 0; i < longarray.length; ++i) to for (long x : longarray) is not fixing or preventing a bug, but it is something that often happens as a result of a Java review. I believe this overly broad interpretation inflates the numbers shown in the charts and tables a bit, but since it errs in favor of code reviews I find it easily excusable.
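To make the review suggestion above concrete, here is a minimal sketch of the two loop styles side by side (the array name longarray comes from the review snippet; the rest of the class is my own illustration). Both forms visit every element; the enhanced for loop, available since Java 5, simply drops the index bookkeeping.

```java
// Illustration of the style change discussed above: an index-based loop
// rewritten as an enhanced for loop. Both compute the same result.
public class LoopStyle {
    public static void main(String[] args) {
        long[] longarray = {1L, 2L, 3L};

        // Index-based loop, as in the original code under review.
        long sum1 = 0;
        for (int i = 0; i < longarray.length; ++i) {
            sum1 += longarray[i];
        }

        // Enhanced for loop, the form suggested at review time.
        long sum2 = 0;
        for (long x : longarray) {
            sum2 += x;
        }

        System.out.println(sum1 == sum2); // prints true: same sum either way
    }
}
```

Neither version is buggy, which is exactly the point: the change is mentoring, not defect repair.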
The author is the founder of Smart Bear Software, a company that, not surprisingly, has a product (Code Collaborator) that aims to ease the burden of doing code reviews. This certainly explains the focus on reducing the overall cost of bugs at the expense of mentoring: the book is aimed at the people who control budgets, not the developers writing code. The self-published nature also shows in something I expect an editor would have caught: the charts presented in landscape mode are rotated the wrong way! He does reference Tufte, though, so I'll assume there is some reason we have to spin the book counter-clockwise instead of clockwise.
All in all, this is a pretty interesting book. A lot of analysis and case studies went into it, and it makes a clear, convincing case in favor of code reviews. It skimps a bit on the mentoring aspects, but as those are probably much harder to quantify, that is understandable. If you want a detailed look at why code reviews are good for a company, this should be your first stop.
Response from the author:
You're right on with your points on mentoring -- both that it's one of the more important aspects of code review and that the book doesn't say much about it.
It's hard to measure the effects of mentoring, just as it's hard to measure the "cost" of finding a bug in the field.
I hear the "it's not a defect" argument a lot. A counter-argument is that even if the "defect" is something like "the code was hard to understand and needs more commentary" or even "the code isn't wrong but there's a better, more insightful way," it's still worth noting!
I think the confusion comes from the word "defect." Really it should be "finding" or something similar that connotes "something that we changed to improve the code and probably improve the author." This is much more positive and is more in line with how real, friendly code reviews go.
A compromise I like is to include a "defect severity" with only three choices: "Severe, Minor, Style." Or change the word "style," but the idea is that not all "defects" are "bad things."
But counting defects is still useful. It's like the parable of the man who is looking for his keys under a streetlight. His friend suggests that he probably lost his keys in a dark spot off to the side. The man agrees but points out that the light is better under the lamppost....
Finally, the most important thing about "defects" is that finding them is GOOD! Whether because it's a bug-fix or a learning experience, someone is growing and learning and the code is getting better.