The Value of Coding Your Book with MATLAB
by Nicholas O’Donoughue

How MATLAB helped me improve the accuracy and value of my book…

When I set out to write my book Emitter Detection and Geolocation for Electronic Warfare, I decided to make MATLAB an integral part of the writing process. Not only was this part of the proposed value of the book (we would provide the reader with MATLAB code), but I knew that it would enforce rigor and accuracy on my part. It is one thing to carefully derive and check equations and performance predictions; it is quite another to test those predictions against carefully constructed test cases and confirm that they are reasonably accurate.

This process, although time-consuming, saved me from numerous errors. In Chapter 7 of my text, I describe several classical direction-finding receivers and derive performance predictions for each in a common formulation, as shown above. This allows direct comparison of Watson-Watt, Adcock, beamscanning, Doppler, and interferometric receivers. While much of this had been derived before, my effort to present them with a common set of parameters led to some transcription errors when I re-derived results, most frequently radians-to-degrees conversion errors. Only through Monte Carlo simulations of each receiver, as shown below for the Watson-Watt DF receiver, was I able to identify all of the errors and ensure that my derivations were accurate.
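To give a flavor of this kind of check (this is an illustrative sketch, not the book's actual code, and all parameter values are made up), here is a Monte Carlo trial for a simple two-element interferometer DF estimate, compared against its linearized RMSE prediction. Note the explicit radian/degree conversions at the end, exactly the kind of step where a transcription error would show up as a mismatch between the two printed numbers.

```matlab
% Illustrative Monte Carlo check of a two-element interferometer DF
% estimate against a linearized RMSE prediction. Baseline, wavelength,
% angle, and noise level are hypothetical values for demonstration.
d = 0.5;                  % baseline spacing [m]
lambda = 1;               % wavelength [m]
theta_true = 10*pi/180;   % true angle of arrival [rad]
sigma_phi = 0.1;          % phase-difference noise std dev [rad]
numTrials = 1e4;

% True phase difference across the baseline
phi_true = 2*pi*d/lambda * sin(theta_true);

% Monte Carlo: add phase noise, then invert the measurement model
phi_meas  = phi_true + sigma_phi*randn(numTrials,1);
theta_est = asin(phi_meas * lambda/(2*pi*d));

% Compare simulated RMSE with the first-order (linearized) prediction
rmse_sim  = sqrt(mean((theta_est - theta_true).^2));
rmse_pred = sigma_phi * lambda/(2*pi*d*cos(theta_true));
fprintf('Simulated RMSE: %.2f deg; predicted RMSE: %.2f deg\n', ...
        rmse_sim*180/pi, rmse_pred*180/pi);
```

If the derivation and the simulation agree to within Monte Carlo error, the prediction is plausible; if they disagree by a factor of about 57.3, someone forgot a radian-to-degree conversion.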

Elsewhere in the book, we present a number of deterministic and iterative algorithms for geolocation. Through a similar process, we were able to verify that our iterative algorithms converged, and to fix errors that surfaced when convergence proved fragile: when the random seed changed, the iterative solutions occasionally failed catastrophically, which uncovered a numerical error in how covariance matrices were being inverted. Once again, running Monte Carlo trials to generate graphics for the textbook helped ensure that the algorithms presented were correct.
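A common source of exactly this kind of fragility (offered here as a general illustration, not as the book's specific fix) is computing an explicit matrix inverse with inv() when the covariance matrix is nearly singular. Factoring the matrix and solving triangular systems instead is the standard, more robust pattern:

```matlab
% Illustrative sketch: applying the inverse of a covariance matrix C to
% a residual vector r, as arises in an iterative least-squares update.
% C and r are made-up values; C is deliberately close to singular.
C = [4 2.999; 2.999 2.25];   % nearly singular covariance (illustrative)
r = [1; -1];                 % residual vector

% Fragile: explicit inverse amplifies rounding error for ill-conditioned C
x_inv = inv(C) * r;

% More robust: Cholesky factorization, then two triangular solves
L = chol(C, 'lower');        % C = L*L'
x_chol = L' \ (L \ r);

% The backslash operator alone is also preferable to inv()
x_back = C \ r;
```

Swapping inv(C)*r for a factor-and-solve pattern changes nothing mathematically, but it is far better behaved numerically, which is precisely the kind of difference a Monte Carlo sweep over random seeds will expose.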

Accuracy wasn’t the only benefit, although it was the most important to me. Because every figure was generated by a script, and I did all of the formatting (font sizes and line weights) programmatically, I was able to quickly regenerate every figure. This came in handy when editors complained about difficult-to-read figures, or when the copy editor wanted a change to my text labels or fonts. Had I generated the figures in a default format and then manually modified them for the text, this process would have been onerous. As it was, each fix was simply a matter of tuning the appropriate parameter and then running a short script to regenerate all of the figures.
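The pattern is simple. As a minimal sketch (the plot, property values, and file name are illustrative, not the book's actual settings), a script can create, format, and export a figure with no manual steps, so a global font or line-weight change is a one-line edit followed by a re-run:

```matlab
% Generate, format, and export a figure entirely from code
fig = figure;
plot(0:0.1:10, sin(0:0.1:10));
xlabel('Time [s]'); ylabel('Amplitude');

% Apply formatting programmatically, so it can be changed in one place
set(findall(fig, 'Type', 'axes'), 'FontSize', 12, 'LineWidth', 1);
set(findall(fig, 'Type', 'line'), 'LineWidth', 1.5);

% Export at print resolution with a predictable file name
print(fig, 'fig_example', '-dpng', '-r300');
```

Wrapping the two set() calls in a shared helper function and calling it after every plot makes the style consistent across an entire book's worth of figures.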

As a bonus, because every figure was generated programmatically in MATLAB, we have chosen to provide all of that code to the readers, in addition to the utilities that I wrote to implement each of the algorithms described.  This will make it easier for readers to test the code and compare their results to those in the textbook.

Using MATLAB to test all of my results and generate every figure was certainly time-consuming, particularly in the first few chapters, while I was still settling on formatting preferences, but it ultimately paid immense dividends. I recommend that all technical authors use some programming language, whether it is MATLAB, Python, R, Java, or C++ (among countless others). What matters most is that the results be tested (to the degree possible), and that the figures be easy to alter and regenerate.