Rating:  Summary: best stat computing book since Thisted Review: Ron Thisted's book on computing algorithms for statisticians was one of the most useful and clearly written texts on the topic. There have also been a few other good ones. Lange brings to the table a more current book that deals with the key new methods such as resampling, Markov chain Monte Carlo, Fourier series and wavelets, the EM algorithm and extensions of it. He also includes useful but uncommon results for power series, exponentiating matrices, and continued fraction expansions. The usual matrix algebra for linear models is also there. You will also find a chapter on nonlinear equations and a chapter on splines. There are asymptotic expansions in Chapter 4 and Edgeworth expansions in Chapter 17. Almost everything that is important in statistical computing today is included. This book can be used for a graduate course in statistical computing and is a valuable reference for any statistical researcher.
Rating:  Summary: Look elsewhere Review: The author states in the introduction: "My focus on principles of numerical analysis is intended to equip students to craft their own software and to understand the advantages and disadvantages of different numerical methods". Let's look at a few topics to see whether these lofty goals were achieved.

Least-squares calculations: The chapter on linear regression is nine pages. The largest section is on the sweep operator (the problems with the sweep are not mentioned). Least-squares problems are solved through the normal equations only, which numerical analysts agree is the least stable of the "big three" methods for solving least-squares problems. There is a page on Woodbury's formula for determinants. Who uses that!? So many problems in statistics eventually boil down to a least-squares calculation, yet this book has almost nothing useful to say about it. How can students "craft their own software" after reading this book? They simply can't. Look elsewhere.

Eigenvalues: The chapter on eigenvalues is eight pages and covers only Jacobi's method and the Rayleigh quotient; nothing on the QR algorithm, nothing on bidiagonalization. These pages would have been better used for something else.

Bootstrap calculations: I decided to check out Section 22.5, "Importance Sampling". After a so-so two-page introduction we get an example. Example 22.5.1 uses the "Hormone Patch Data" from Efron and Tibshirani's bootstrap book (a wonderful book, by the way). First, the analysis is botched: the numerator and denominator variables were interchanged (relative to Efron and Tibshirani). Now the denominator has positive probability of being zero, which is not a problem in and of itself. Then there is a graph based on 100,000 bootstrap samples. The book says: "Clearly, importance sampling converges more quickly." Figure 22.1 shows that it actually didn't converge at all! And do we really need importance sampling for this problem? The whole exact bootstrap distribution has 8^8 = 16.7 million points (at most). It took just one minute to write and run a program that computed the exact tail probability. Why the hell do I need 100,000 bootstrap samples to approximate something I can compute exactly with less work? What can students actually learn from this?

I can go on and on, but I'll stop here. What is good about this book? It does occasionally explain the math behind certain methods nicely, but even then it doesn't integrate the ideas well enough for a student.
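The reviewer's exact-bootstrap point is easy to verify in spirit. Here is a minimal sketch of such a program: it enumerates every ordered resample and computes the tail probability exactly. The data values, the mean statistic, and the threshold are synthetic placeholders for illustration, not the actual Hormone Patch Data or its ratio statistic.

```python
# Sketch: exact bootstrap tail probability by brute-force enumeration.
# All names and data below are illustrative placeholders.
from itertools import product

def exact_bootstrap_tail(data, statistic, threshold):
    """Enumerate all n^n ordered resamples (each occurring with probability
    n^-n) and return the exact probability that the statistic meets or
    exceeds the threshold."""
    n = len(data)
    hits = sum(statistic([data[i] for i in idx]) >= threshold
               for idx in product(range(n), repeat=n))
    return hits / n ** n

# Small illustrative case: n = 5 gives 5^5 = 3125 resamples. The review's
# n = 8 gives 8^8 = 16,777,216 resamples -- still enumerable in roughly a
# minute in a compiled language or with vectorized code.
data = [1.2, -0.4, 0.8, 2.1, -1.3]
mean = lambda s: sum(s) / len(s)
p = exact_bootstrap_tail(data, mean, 1.0)
print(f"Exact P(resample mean >= 1.0) = {p:.4f}")
```

The point being illustrated: 100,000 Monte Carlo resamples merely approximate a quantity that, for n as small as 8, this kind of exhaustive enumeration computes exactly.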