WebSphere Application Server - Performance Tuning: Adjusting WebSphere Application Server System Queues

Abstract

This tip describes the WebSphere Application Server queuing network and how it can be configured and adjusted to improve WebSphere Application Server performance.

Adjusting WebSphere Application Server system queues

WebSphere Application Server establishes a queuing network, which is a group of interconnected queues that represent various components. There are queues established for the network, Web server, Web container, EJB container, Object Request Broker (ORB), data source, and possibly a connection manager to a custom back-end system. Each of these resources represents a queue of requests waiting to use that resource. Queues are load-dependent resources. As such, the average response time of a request depends on the number of concurrent clients.

As an example, think of an application consisting of servlets and EJBs that accesses a back-end database. Each of these application elements resides in the appropriate WebSphere component (for example, servlets in the Web container), and each component can handle a certain number of requests in a given time frame.

A client request enters the Web server and travels through the WebSphere components before a response is returned to the client. The following figure illustrates the processing path this application takes through the WebSphere components as interconnected pipes that form a large tube.

Figure: Queuing network

The width of the pipes (illustrated by height) represents the number of requests that can be processed at any given time. The length represents the processing time that it takes to provide a response to the request.

In order to find processing bottlenecks, it is useful to calculate a transactions-per-second (tps) ratio for each component. Ratio calculations for a fictional application are shown in this example (and recomputed in the sketch that follows the list):

The Web server can process 50 requests in 100 ms = 50 req / 0.1 s = 500 tps

The Web container parts can process 18 requests in 300 ms = 18 req / 0.3 s = 60 tps

The EJB container parts can process 9 requests in 150 ms = 9 req / 0.15 s = 60 tps

The data source can process 40 requests in 50 ms = 40 req / 0.05 s = 800 tps
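
These ratios are simple arithmetic, and the small sketch below merely restates them in code. The component names and figures are taken from the fictional example above, not from any real measurement:

// Throughput of each component in the fictional example:
// tps = concurrent requests / service time in seconds.
public class ThroughputRatios {

    static double tps(int concurrentRequests, double serviceTimeSeconds) {
        return concurrentRequests / serviceTimeSeconds;
    }

    public static void main(String[] args) {
        System.out.printf("Web server:    %4.0f tps%n", tps(50, 0.100)); // 500 tps
        System.out.printf("Web container: %4.0f tps%n", tps(18, 0.300)); //  60 tps
        System.out.printf("EJB container: %4.0f tps%n", tps(9,  0.150)); //  60 tps
        System.out.printf("Data source:   %4.0f tps%n", tps(40, 0.050)); // 800 tps
    }
}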

The example shows that the application elements in the Web container and in the EJB container process requests at the same speed. Nothing would be gained from increasing the processing speed of the servlets and/or the Web container because the EJB container would still only handle 60 transactions per second. The requests normally queued at the Web container would simply shift to the queue for the EJB container.

This example illustrates the importance of queues in the system. Looking at the operations ratio of the Web and EJB containers, we see that each is able to process the same number of requests over time. However, the Web container could produce twice the number of requests that the EJB container could process at any given time. In order to keep the EJB container fully utilized, the other half of the requests must be queued.

It is common for applications to have more requests processed in the Web server and Web container than by EJBs and back-end systems. As a result, the queue sizes would be progressively smaller, moving deeper into the WebSphere components. This is one of the reasons that queue sizes should not depend solely on the operation ratios.

The following section outlines a methodology for configuring the WebSphere Application Server queues. The dynamics of an individual system can change dramatically when resources are moved (for example, moving the database server onto another machine) or when more powerful resources are provided (for example, a faster set of CPUs with more memory). Thus, adjustments to the tuning parameters apply to a specific configuration of the production environment.

Queuing before WebSphere

The first rule of tuning is to minimize the number of requests in WebSphere Application Server queues. In general, requests should wait in the network (in front of the Web server), rather than in WebSphere Application Server. This configuration allows only those requests that are ready to be processed to enter the queuing network. To accomplish this, specify that the queues farthest upstream (closest to the client) are slightly larger, and that the queues farther downstream (farthest from the client) are progressively smaller.

As an example, consider a queuing network that becomes progressively smaller as work flows downstream. When 200 client requests arrive at the Web server, 125 requests remain queued in the network because the Web server is set to handle 75 concurrent clients. As the 75 requests pass from the Web server to the Web container, 25 remain queued in the Web server and the remaining 50 are handled by the Web container. This process continues through the data source until 25 user requests arrive at the final destination, the database server. Because there is work waiting to enter a component at each point upstream, no component in this system has to wait for work to arrive. The bulk of the requests waits in the network, outside of WebSphere Application Server. This type of configuration adds stability, because no component is overloaded. The Edge Server Components can be used to direct waiting users to other servers in a WebSphere Application Server cluster.
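
The funnel shape can be sanity-checked with a few lines of code. The sketch below is only illustrative: the limits mirror the 75/50/25 example above and are not recommendations, and the setting names in the map (the Web server's maximum concurrent clients, the Web container thread pool maximum size, and the data source connection pool maximum connections) are the usual places these limits live, but the exact names and locations depend on your Web server and WebSphere Application Server version.

import java.util.LinkedHashMap;
import java.util.Map;

// A minimal sketch that checks the "funnel" shape of the queue settings:
// each queue downstream should be no larger than the one before it.
public class QueueFunnelCheck {

    public static void main(String[] args) {
        // Maximum concurrency per tier, ordered from the client inward.
        Map<String, Integer> maxConcurrent = new LinkedHashMap<>();
        maxConcurrent.put("Web server maximum concurrent clients", 75);
        maxConcurrent.put("Web container thread pool maximum size", 50);
        maxConcurrent.put("Data source connection pool maximum connections", 25);

        int upstreamLimit = Integer.MAX_VALUE;
        for (Map.Entry<String, Integer> entry : maxConcurrent.entrySet()) {
            String warning = entry.getValue() <= upstreamLimit
                    ? ""
                    : "  <-- larger than the queue upstream; requests would pile up here";
            System.out.printf("%-50s %3d%s%n", entry.getKey(), entry.getValue(), warning);
            upstreamLimit = entry.getValue();
        }
    }
}

The same check can of course be done by hand; the point is simply that each maximum should shrink as you move away from the client, so that excess requests wait in the network rather than inside WebSphere Application Server.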

Note: If resources are more readily available on the application server or database server, it may be appropriate to tune such that every request from the Web server has an available application server thread, and every application server thread has an available database connection. The need for this type of configuration depends on the application and overall site design.

For more information about how to determine the optimum queue sizes, please see TIPS0245, IBM WebSphere Application Server - Performance Tuning: Determining Optimum Queue Sizes.

Special Notices

This material has not been submitted to any formal IBM test and is published AS IS. It has not been the subject of rigorous review. IBM assumes no responsibility for its accuracy or completeness. The use of this information or the implementation of any of these techniques is a client responsibility and depends upon the client's ability to evaluate and integrate them into the client's operational environment.

Profile

Publish Date
22 July 2003

IBM Form Number
TIPS0244