Channel: Software Trading

Do code reviews actually work? Honk!


Code reviews have become so common that you could almost call them common sense, and agile aficionados promote them as a way of tightening feedback loops. This, of course, improves quality, which lowers total cost of ownership and makes everyone sing happy songs like Kumbaya.

Having already benefitted from code reviews over the last few years, I decided to reach for the research to see if there was anything useful that could add to team effectiveness. Turns out, there probably is, especially in terms of catching and preventing bugs before the software goes into production.

So far, the three biggest wins I’ve personally experienced from code reviews include:

1. catching bugs

2. discussing and suggesting coding standards

3. learning from one another, particularly with respect to finding an optimal solution to a problem

There is a strong component of teamwork and knowledge transfer during an informal code review which doesn’t provide immediate benefits for your friendly neighborhood bean counter, but should increase team cohesiveness and effectiveness over the long run. Good code reviews are like peer-to-peer mentoring. As a result of the discussion, you understand the problem, the approach used, and also learn whether you or anyone else on the team could solve the problem more elegantly. Moreover, everyone learns something about that part of the code. If they ever need to modify it in the future, they will understand the hidden intangibles: the context of the code, the assumptions made by the developer, and the business need it is meant to address.

I had a look at Best Kept Secrets of Peer Code Review by Jason Cohen (not of the Coen brothers, but of SmartBear Software) and the chapter he contributed to Making Software by Andy Oram and Greg Wilson. Here are a few nuggets I gleaned from their review of the academic literature:

1. The amount of time spent on code reviews significantly influences the number of bugs caught, but

2. The effectiveness (rate of catching bugs) drops off significantly after an hour.

3. Checklists can be useful, e.g. for class formatting and style, supposedly finding more bugs than systematic reviews and discussions of edge cases, although this sounds rather stiff for an agile environment. Also, I suspect that they find less important bugs: things like not using const & when passing objects around in C++, rather than problems in the logic of an algorithm.

4. Reading others’ code can be more effective than meetings at finding bugs, by about 50% according to one study, although you lose out on the interpersonal communication which is the whole point of agile (communication saturation).

5. More than one extra person doing a review doesn’t add much to bug discovery rates (about 4%). I suspect this cut-off would be higher if the code review turned into an informal design discussion, though that too benefits from not involving too many team members.

6. Reviewing more than 300 lines of code per hour is correlated with significantly less bug finding. Of course, this depends on the type of code you are reviewing.

7. Checking your own code, say a week after you’ve written it, also offers benefits: you catch about half as many bugs this way. Having another developer look at the code provides roughly double the effectiveness in terms of bug squashing.

There is a good amount more detail in the sources, so I recommend you take a look at them. Cohen works for a vendor of code review software, so he may have a conflict of interest in his writing, but I suspect that even the advice that helps promote the sales of their software (the use of a tool for remote code reviews) is actually true.

It’s clear that code reviews can deliver a lot of value even if you only measure bug catching, and that value outweighs the cost of the extra developer time. On top of that there are other benefits, particularly in an agile environment.

There is also a thin line between frequent code reviews and pair programming. We do a lot of the former, particularly informal reviews, but generally don’t pair up that often; more of a team preference than anything else, really. Good luck on your next bug hunt!


Filed under: Book Review, Scrum, Software, XP Tagged: Code review, Greg Wilson, Jason Cohen, Programming, SmartBear
