J2EE Performance Testing

List Price: $49.99
Your Price: $33.99
Reviews


Rating: 5 stars
Summary: Excellent Read
Review: I was looking for a book to help me understand how regular performance testing is performed so that I could construct a plan for Denial of Service security testing. I knew nothing about performance testing at all. The book is really well laid out and structured, has great examples, and is really methodical. It was perfect!

Rating: 4 stars
Summary: A good introduction
Review: In the last decade, the performance of J2EE applications has become of monumental importance in the enterprise industries that use them. With the complexity of J2EE applications increasing every year, it is crucial that users of these applications be presented with a level of performance that is acceptable to them; this performance is usually codified in the ubiquitous "response time." The authors of this book have given a good introduction to how to deal with performance issues in WebLogic applications, and they discuss a freely available tool, called Grinder, which handles load generation and data collection. The book, though, can be read with any load-generating tool in mind, such as Mercury LoadRunner. Even though Grinder is free, it may take time for enterprise users to trust it in testing and modeling.

After a brief introduction to what the book is all about, the authors begin in chapter 1 with a discussion of a testing methodology for performance studies of J2EE applications, which they hope will be generic enough for all readers. Their methodology boils down to first defining the performance metrics for the application and then setting a target for those metrics. Test scripts that accurately simulate the application usage must then be obtained, and the statistical sampling method and metrics defined. The authors emphasize the need for a realistic 'usage profile' for the application, and they strongly recommend a fixed number of users per test run, with subsequent runs changing the number of users. They do not give quantitative reasons for not varying the number of users within a run, but merely say that such an approach is "statistically incorrect."

They also point out the need to include "think times" between the executions of each request in a script, asserting that think times have a very dramatic effect on the observed response times and throughput for a given user load. They are correct in this claim, as testing and modeling studies show, and they give examples of it in chapter 4 of the book. In addition, they caution against attempting to simulate more users by decreasing the think time, on the assumption that the resulting data can then be extrapolated to obtain the performance at real think times. They point out, correctly, that applications do not scale linearly over different time scales, and that the application and Web servers, the database server, and the operating system do not interact the same way under different user loads. Performance testers and modelers have verified these observations time and time again, so it is beneficial for a reader new to the field that the book includes case studies illustrating them.
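The think-time effect can be seen with simple arithmetic (the numbers below are hypothetical, not taken from the book): in a closed test, each simulated user alternates between waiting for a response and "thinking," so by Little's law the offered throughput is roughly the user count divided by the sum of response time and think time.

```python
def throughput(users, response_time_s, think_time_s):
    """Approximate requests/second for a closed system of simulated users
    (Little's law): each user cycles through one response plus one think time."""
    return users / (response_time_s + think_time_s)

# 100 simulated users with a 0.2 s average response time:
zero_think = throughput(100, 0.2, 0.0)    # 500 req/s: hammering the server
real_think = throughput(100, 0.2, 10.0)   # under 10 req/s: a very different load
```

This is why zero-think-time results cannot simply be extrapolated: the server is exercised under a radically different request rate than it would see from real users.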

The authors discuss two sampling methods in the book, namely the 'cycle' method and the 'snapshot' method. A cycle is defined as a complete execution of a test script by a simulated user, so in each cycle a user executes every request in the script once. Increasing the number of cycles will result in more meaningful statistics, but the time to run a large number of cycles might be prohibitive. The snapshot method instead involves capturing the data for a specified period of time.
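The two sampling methods might be sketched like this (a minimal illustration of the idea, not the Grinder API; a "request" here is any callable returning a response-time sample):

```python
import time

def run_cycles(script, num_cycles):
    """'Cycle' method: the simulated user executes the whole script a fixed
    number of times, so every request runs exactly num_cycles times."""
    samples = []
    for _ in range(num_cycles):
        for request in script:
            samples.append(request())
    return samples

def run_snapshot(script, duration_s, clock=time.monotonic):
    """'Snapshot' method: execute the script repeatedly and keep whatever
    samples fall inside a fixed time window."""
    samples = []
    deadline = clock() + duration_s
    while clock() < deadline:
        for request in script:
            samples.append(request())
    return samples
```

The trade-off described above falls out directly: `run_cycles` gives a predictable sample count per user but an open-ended running time, while `run_snapshot` bounds the running time but not the sample count.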

It is rare to see, in books at this level, a statement acknowledging the difficulty of modeling Internet traffic mathematically or by simulation. The authors, though, are cognizant of this difficulty, and give some brief suggestions on how to simulate the Internet in a test environment.

The authors also devote a fair amount of time to discussing how to assess the accuracy of the test results. They report that variability of up to 50% has been observed in the performance testing of applications, and so they propose a measure of "quality" for the sample data. This is defined as the standard deviation divided by the arithmetic mean; values close to zero indicate high quality in the sample data. A quality value above 0.25 they take as a sign that the tests are not reproducible, and they therefore encourage running more cycles of the test in order to pin down the origins of this non-reproducibility. To better quantify this, they define a "load factor" in terms of an "aggregate" average response time; plotting this quantity versus the number of cycles gives some insight into the origin of a bad quality indicator.
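The quality indicator described above is simply the coefficient of variation. A minimal sketch (the response-time samples are invented for illustration; the population form of the standard deviation is used here, as the book does not specify which):

```python
from statistics import mean, pstdev

def quality(samples):
    """Quality indicator: standard deviation divided by the arithmetic mean
    (the coefficient of variation, population form). Values near zero mean
    the sample data is reproducible; above roughly 0.25, run more cycles."""
    return pstdev(samples) / mean(samples)

# Hypothetical response-time samples (seconds) from two test runs:
stable  = [0.20, 0.21, 0.19, 0.20, 0.20]   # quality well below 0.25: reproducible
erratic = [0.10, 0.45, 0.20, 0.90, 0.15]   # quality well above 0.25: investigate
```

Because the metric is normalized by the mean, it can flag an unreliable run regardless of whether the absolute response times are milliseconds or seconds.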

Frequently, application development using J2EE requires that the impact of design changes or proposals on application performance be understood. The authors address how performance can be impacted in the context of building servlet applications. The dynamic nature of servlet applications entails that special measures be taken to maximize performance. The authors discuss how to choose a session mechanism that will preserve the session across user requests, and how to manage the servlet thread pool. Other helpful hints are given on how to increase performance, such as making sure that the auto-reload feature of servlets is disabled in a production environment.

In testing the servlet API, the authors choose the snapshot method of data collection and use zero think times as a baseline, since the real think times are unknown. They use WebLogic Server 6.1 in this discussion, however, which makes their presentation somewhat dated, since WebLogic is now at version 8.1. The authors also test the performance when the WebLogic performance pack is activated, for both the average response time and the transactional rate. Also studied is the cost of maintaining HTTP logs, an issue that is very important for businesses that must keep these logs, whether for advertising or other reasons. By running tests, the authors conclude, as readers who have managed Web servers would expect, that keeping log files can have a considerable impact on performance at high user counts. The effects of the size of the response generated by the test servlet are also studied, along with the effects of using HTTP 1.0 versus HTTP 1.1.

Rating: 5 stars
Summary: J2EE Performance Testing with BEA WebLogic Server
Review: It was a great book! It had lots of information about performance testing. When coupled with the power of Panorama(TM) by Altaworks.com, it is incredible.

Rating: 5 stars
Summary: EJB chapter rocks
Review: Thanks for making such a good book available. I think the chapter on testing EJB design patterns is very well written. I would recommend this book as it is one of the best I've gotten my hands on.

Rating: 5 stars
Summary: Superb book about performance tuning
Review: This is the best book yet about J2EE performance tuning. I hope 'Expert Press' (which looks like a Wrox imprint) continues as they have started.

The authors lay out a practical method for performance tuning of Web applications and EJBs on BEA WebLogic, but there is no reason why the approach (and the 'Grinder' tool) cannot be used to evaluate different approaches on any other Web and application server.

Note that this is a specialized book. It will not teach you how to do Java or EJBs. What it will do is help you evaluate how to deploy them in the real world to get the performance you need, and also to help you evaluate different approaches.



© 2004, ReviewFocus or its affiliates