Rating:  Summary: Not as good as I hoped Review: The basic concept of the book is that 'quality cannot be inspected in, it must be built in'. I did come away with some new ideas and some interesting points. However, I frequently found the book to stray from its main theme.
Rating:  Summary: Inroads: A reality in software. Review: Book has lots of templates which are easy to use. The book is extremely practical and a "MUST" have for every software engineer.
Rating:  Summary: New and practical view of software quality Review: Every time you buy a book that relates to quality you find the same ideas: interesting, but how do you carry them out? This book will help you to build a quality system that fits your organization. Good explanation of the use of checklists, testing, and a new software quality paradigm.
Rating:  Summary: Book Feedback Review: Excellent book with lots of templates and easy to understand text. Authors have done a great job.
Rating:  Summary: New Paradigm Review: In our earlier email, due to space limitations, we could not explain everything. Here is additional information on our "promise of a new paradigm in software quality assurance." The paradigm provides:

[1] A model or pattern for a product delivery process, with an extremely effective feedback mechanism.

[2] A set of programming and software architectural quality standards to which the product must conform, standards that are extremely sensitive to "technological failures." Thus, there is no discussion or argument among team participants as to what to include and what not to include. {We chose to define our quality standards in terms of their impact on the new paradigm, rather than parrot back the IEEE standards, which we feel only partially fit.}

[3] The ability to have both the software product and its quality measures conform to "zero defect" standards is also important. Because the filters "learn" while they are being used, they gradually evolve into more and more effective tools. They are not static, nor is the process. The goal is "continuous process improvement," not "perfection." As technology changes, along with customer expectations, the process must readily adapt. But in order for an error to get to the customer, it must have passed through 18+ broken filters!

[4] Because of the small number of patterns which are "correct," it is possible to test indirectly, through "pattern matching" and "certification," that the code is correct. This has been in place at DT for over 3 years. These patterns are further "constrained" by adding "state invariance" and "dependency elimination" to the standards. This tends to eliminate the need for traditional Unit Testing and Functional Testing. Deming stated, as we quoted in the book: "Cease dependence on inspection to determine quality, rather design it in up front." 
In testing, we took this to mean, "If you know something can be an error, design it out rather than relying on testing or inspections to find it later." At DT, this means that all software is "certified" correct by 3+ programmers. All other testing is "behavioral" and is related to true measures of software quality, as we defined them in the book.

[5] Traditional testing begins by stating: "Given the existing code, how do we test it to find errors (or defects or whatever)?" Mostly this "indirect testing" is related to testing with a given level of "coverage" of the code itself, paths through the code, etc. It also relates to testing the functionality on the interfaces. Once you model the "states" the system can take on, you find that most of the time there are millions of potential states, and it is impossible to determine how to prove that each one works. This led us to minimize the number of states, mainly through removing dependencies and enforcing "state invariance." Additionally, we minimized the number of states generated at "input points," whether they came from hardware, ancillary software, third-party software, user inputs, or whatever. Our major emphasis was eliminating a large number of potential "states" in the user profiles.

[6] By re-defining our interfaces, even in legacy code, to reflect User Profiles, several things can be accomplished: (1) You can eliminate the "push and pop syndrome," i.e., having the removal of a defect cause the system to break at one or more additional points. In the book, we discuss the problems and potential impact dependencies have on Regression Testing, and how they can be eliminated. (2) You can minimize the number of things a user can do, noting that wherever there is a dependency in the user interface, there is the potential for creating an exceedingly large number of states, and these dependencies tend to be very error-prone for a user. 
Further, how do you train a user if there are 10^157 ways they can use the system? (3) You can enumerate a small set of "valid" user profiles, i.e., those which you are able to test and prove correct. Although this does not "protect" the user, it allows you to minimize the states to those you can test and prove to be correct. This allows you to "stabilize" legacy software containing a large number of undocumented (and untested) states. (4) By re-defining "operational profiles" from Software Reliability Engineering so that they are defined in terms of a finite set of user profiles (as opposed to a probability distribution based on functional usage during production), it is possible to show "infinite mean time to failure." You can see that while most of the tools used have been around for a long time, they are being used in a completely different context, which is why we have called this a new paradigm.

Evaluations of the value of the book:

[1] It was reviewed favorably in the November issue of Computing Reviews.

[2] It is being used as a text in several universities in the University of California system.

[3] It is being used in at least one junior college in the Washington, D.C. area.

[4] It has been highlighted in a special issue of Computer World on software quality.

[5] The ideas in the book have been proven to work at Digital Technology International, Intel, and other software companies. {Intel has won awards for the high quality of software built under the new paradigm.}

[6] Last fall, when DT qualified for ISO-9000 registration, all three auditors had copies of the book, which we used as our Quality Manual, along with other filters and processes in place at DT. While they found areas where improvement was needed, they also complimented us on our process. One of the auditors also works as a consultant preparing companies for SEI CMM certification. Her comment was, "You are only a short distance from showing conformance to SEI CMM level 5!" 
ISO-9000 usually only evaluates companies at around CMM level 2 or 3. All three auditors are working with Vern to put out a paper on how well this paradigm fits with ISO and CMM. The CMM expert wants to work with DT, as do several of the SEI-qualified auditors she works with. It is difficult to demonstrate "continuous process improvement," and its measurement and management, to the degree needed to conform to CMM level 5, which is why so few companies achieve level 5. Our paradigm demonstrates this conformance quickly and accurately, and provides for extremely tight control of the process and its improvement, while still focusing almost entirely on ensuring quality at each stage of the process. Error rates and similar metrics have less meaning in this context. But the process can be efficiently managed on a daily or weekly basis and audited as frequently as you desire.

[7] A 3-part paper describing this Deming-oriented Product Delivery Process has been published in the Journal of the Quality Assurance Institute (October 1997, January 1998, and April 1998). Dr. Bill Perry, of QAI, kindly wrote the foreword to the book.

[8] A half-day tutorial on the new paradigm will be presented at the 15th International Conference on Testing Computer Software, in Washington, D.C., in June 1998. There will also be a paper on "Protecting Yourself from Instabilities in Third Party Software," based on the new paradigm.

[9] The book was into its second printing within about 6 months of its publication date, with most of the interest generated through word of mouth.

One last comment on Mr. Meyn's evaluation: [1] Mr. Meyn questions the appropriateness of the comment on OO. In terms of the quality measures we defined around independence and re-use, OO violates many of the quality measures we suggest be enforced. Interestingly, the Air Force version of ADA and the new JAVA language, while based on OO concepts, allow for easy elimination of the dependency issue. 
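The inheritance-dependency point above can be illustrated with a small sketch. This is my own hypothetical example of trading inheritance for composition to confine the dependency to a narrow interface; the class names are invented and this is not code from the book:

```python
# Hypothetical sketch of the "inheritance always causes dependencies"
# point: a subclass depends on its parent's internals, so a change to
# the parent can silently break the child. Composition confines the
# dependency to the one method actually called.

class Logger:
    def log(self, msg: str) -> str:
        return f"[log] {msg}"

# Inheritance: the report *is a* Logger and inherits all of its details.
class InheritedReport(Logger):
    def render(self) -> str:
        return self.log("report rendered")  # coupled to parent internals

# Composition: the report merely *uses* a logger passed in from outside;
# the only dependency is the narrow `log` method it calls.
class ComposedReport:
    def __init__(self, logger: Logger) -> None:
        self._logger = logger

    def render(self) -> str:
        return self._logger.log("report rendered")

print(InheritedReport().render())         # [log] report rendered
print(ComposedReport(Logger()).render())  # [log] report rendered
```

Both produce the same output, but the composed version can swap its logger without touching the report class, which is one way the dependency the authors object to can be designed out.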
This is only a small part of what is covered in the book.
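The "valid user profiles" idea from point [6] of the response can be sketched in code. This is only a minimal illustration under my own assumptions: profiles are modeled as named sets of allowed actions, and every name and action here is hypothetical, not taken from the book or from DT's actual system:

```python
# Minimal sketch: instead of trying to test every reachable state,
# enumerate a small, finite set of "certified" user profiles and reject
# any action sequence that falls outside them. Names are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)  # frozen gives a simple form of state invariance
class Profile:
    name: str
    allowed_actions: frozenset

VALID_PROFILES = {
    Profile("viewer", frozenset({"open", "read", "close"})),
    Profile("editor", frozenset({"open", "read", "edit", "save", "close"})),
}

def conforms(profile_name: str, actions: list) -> bool:
    """True only if every action falls within a certified profile."""
    for p in VALID_PROFILES:
        if p.name == profile_name:
            return set(actions) <= p.allowed_actions
    return False  # unknown profiles are rejected outright

print(conforms("viewer", ["open", "read"]))  # True
print(conforms("viewer", ["open", "edit"]))  # False: outside the profile
```

The point of the sketch is that the set of behaviors to test and prove correct is now the small, enumerable `VALID_PROFILES`, rather than the full state space of the system.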
Rating:  Summary: Limited value for money Review: This book originally attracted me for the checklists contained in the appendix and for its promise of a new paradigm in software quality assurance. However, while reading this book - and I tend to be thorough - I became thoroughly dismayed. First, this book has all the appearances of not having been reviewed. Basic author craftsmanship is not evident. Instances of bad style are common. In many places it becomes somewhat incoherent - sometimes to the point that I could not understand what was intended to be conveyed. With the exception of the checklists, the book's contents fail to live up to expectations. Sometimes I got the impression that chapters had been written to make up the page numbers. And I was left wondering whether the authors really understood what they were writing. Take for instance the following quote, which is part of a critique of OO technologies: "Inheritance always causes dependencies! These can be eliminated through fancy footwork, but the question is: 'Why Bother?'" (You might also ask what a critique of OO technologies has to do with a book on SW quality, but the authors display a tendency to editorialize.) In other areas the book simply does not deliver at all. Here are some examples: In the chapter "Techniques For Process Assurance," under the heading "Project Team"(?), the authors provide 7 lines on how important good team selection is but fail to provide any references on how to create such teams (such as Lister and DeMarco's Peopleware). A project team has nothing to do with techniques, and the authors would have done better to remove the topic rather than try to cover such a complex area in 7 lines. Likewise, the chapter "Software Quality Assurance Reviews" sounds like a copy of the IEEE standards, but no information is given on how to make these reviews actually work. The entries in the bibliographic reference section give the impression of not having been carefully selected. 
The above-mentioned area on reviews and inspections fails to mention Gilb's book on inspections and refers only to a publication by Fagan on this topic. In two of the appendices, several pages of text are repeated word for word. The proofreader must have fallen asleep. The authors proclaim their product delivery process is a 'new paradigm.' After having read the book, I cannot see a new paradigm (apart from the misuse of the term). What is new in checklists? Many companies have them because they are very effective. What is new in market-oriented reviews? My overall impression is that the authors have a good collection of checklists that they wanted to turn into a book. It appears to me that they then added, seemingly at random, more information to make up the volume. The result backfires badly, because it turned a decent nucleus into a book that I find not worth buying. In fact, this is the first time in my life I have returned a book to the vendor.
Rating:  Summary: Good insights on the New Paradigm. Detailed checklists and Review: This is a must-have book for those who continuously find the same defects from release to release. The New Paradigm gives readers the concepts and implementation of "filters" that prevent defects from going further in the development cycle. This book is not about software testing or reviews but about Software Quality Assurance. It has overall concepts on the product assurance and process assurance activities that increase the robustness of a product. There are several books on the market on the subjects of inspections and testing. After reviewing several of them, I found this is the only book that has taken the Deming principle of "defect containment" and shown readers the effectiveness of the processes, if implemented correctly.