Wednesday, June 23, 2010

WebSphere Performance Tuning

The performance of your e-business hosting environment is key to the overall success of your organization’s e-business, so there is always a major focus on tuning the application hosting environment. WebSphere Application Server provides tunable settings for its major components, enabling you to adjust the runtime environment to better match the characteristics of your application. For many applications the default settings are sufficient for optimal performance, while some applications may need tuning, such as a larger heap size or more connections. Performance tuning can yield significant gains even if an application has not itself been optimized for performance. But remember: it is not only WebSphere Application Server tuning that matters; other factors, such as application design and hardware, also affect overall performance.
In this article I’ll describe how to tune the application hosting environment for better performance. The article focuses on the tunable parameters of the major WebSphere Application Server components and provides insight into how these parameters affect performance.

Here, we will discuss the majority of WebSphere’s tuning parameters, grouped into three categories.
1. JVM and DB connectivity
2. Messaging/JMS
3. Others (caching, transport channels, etc.)

1. JVM and DB Connectivity:



In section 1, we discuss the tuning parameters related to the JVM and DB connectivity, namely:
a. JVM heap size
b. Thread pool size
c. Connection pool size
d. Data source statement cache size
e. ORB pass by reference 

1A. JVM Heap size:
  Heap size is the most important JVM tuning parameter, as it directly influences performance.

  • A heap that is too small causes garbage collection (GC) to occur more frequently, and fewer objects can be allocated before the heap fills up; in the worst case you may see out-of-memory application failures.

  • Increasing the heap size allows more objects to be created before a GC is triggered, so the application runs longer between GC cycles. But a larger heap also means each GC takes longer, and during that period the application may not respond.
Another important parameter in JVM tuning is the garbage collection policy.
Three main GC policies are available:

  • optthruput: performs the mark and sweep operations while the application is paused, to maximize application throughput. This is the default setting.

  • optavgpause: performs the mark and sweep concurrently while the application is running, to minimize pause times. This setting provides the best application response times.

  • gencon: treats short-lived and long-lived objects differently to provide a combination of lower pause times and high application throughput.
Tuning:
Tuning the JVM heap size means striking a balance between the time between two GC cycles and the time each GC takes. The first step is to enable verbose GC, which prints useful JVM information such as the amount of free and used bytes in the heap and the interval between GCs. All this information is logged to native_stderr.log, and various tools are available to visualize the heap usage.

Defaults:
WebSphere Application Server’s default heap settings are 50 MB for the initial heap and 256 MB for the maximum.

Note: What happens if we set the initial and maximum heap sizes to the same value?
This prevents the JVM from dynamically resizing the heap and avoids the overhead of repeatedly allocating and deallocating memory. However, JVM startup will be slower, because the full heap must be allocated up front.
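As a concrete illustration, the settings discussed above map to generic JVM command-line arguments. The sizes below are illustrative examples, not recommendations; in WebSphere these are typically entered as "Generic JVM arguments" under the server’s Process definition > Java Virtual Machine page in the admin console.

```shell
# Illustrative IBM JVM arguments combining the settings above:
# fixed 512 MB heap (initial == maximum), gencon GC policy, verbose GC on
-Xms512m -Xmx512m -Xgcpolicy:gencon -verbose:gc
```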
Tools to analyze verbose GC output: IBM Monitoring and Diagnostic Tools for Java – Garbage Collection and Memory Visualizer (integrated into IBM Support Assistant).

1B. Thread Pools:
 
A thread pool enables components of the server to reuse threads, eliminating the need to create new threads at runtime to service each new request.
The most commonly used thread pools in the application server are:

1. Default: used when requests come in for a message driven bean (MDB) or if a particular transport chain has not been defined to a specific thread pool.

2. ORB: used when remote requests come over RMI/IIOP for an enterprise bean from an EJB application client, remote EJB interface or another application server.

3. Web container: used when requests come in over HTTP.

Tuning parameters for Thread pools:

- Minimum size: The minimum number of threads permitted in the pool. When an application server starts, no threads are initially assigned to the thread pool. Threads are added to the thread pool as the workload assigned to the application server requires them, until the number of threads in the pool equals the number specified in the minimum size field. After this point in time, additional threads are added and removed as the workload changes. However, the number of threads in the pool never decreases below the number specified in the minimum size field, even if some of the threads are idle.

- Maximum size: Specifies the maximum number of threads to maintain in the pool.

- Thread inactivity timeout: Specifies the amount of inactivity (in milliseconds) that should elapse before a thread is reclaimed. A value of 0 indicates not to wait, and a negative value (less than 0) means to wait forever.
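The three settings above behave much like the constructor parameters of a standard `java.util.concurrent.ThreadPoolExecutor`. As a rough analogy (an illustration of the semantics, not WebSphere's internal implementation), a pool with the ORB defaults of minimum 10, maximum 50, and a 3500 ms inactivity timeout could be sketched as:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class OrbPoolSketch {
    // Rough analogue of the ORB defaults: minimum 10, maximum 50,
    // and a 3500 ms inactivity timeout after which an idle thread
    // above the minimum is reclaimed.
    public static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                10, 50,                       // minimum and maximum pool size
                3500, TimeUnit.MILLISECONDS,  // thread inactivity timeout
                // a SynchronousQueue makes the pool grow toward the
                // maximum under load instead of queueing work
                new SynchronousQueue<Runnable>());
    }
}
```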

Defaults:
ThreadPool      Minimum   Maximum   Inactivity timeout
Default         20        20        5000 ms
ORB             10        50        3500 ms
Web container   50        50        60000 ms


Tuning
WebSphere Application Server’s integrated Tivoli Performance Viewer lets you view the Performance Monitoring Infrastructure (PMI) data associated with thread pools, provided you have enabled PMI.
In the Tivoli Performance Viewer, select the server and expand the parameters list. Go to Performance Modules -> Thread Pools and select Web Container. You can see the pool size, which is the average number of threads in the pool, and the active count, which is the number of concurrently active threads. Using this information you can decide how many threads a pool requires. You can also use the performance advisors to get recommendations.

1C. Connection pool

When an application uses a database resource, a connection must be established, maintained and then released when the operation is complete. These processes consume time and resources. The complexity of accessing data from web applications imposes a strain on the system.
An application server enables you to establish a pool of back-end connections that applications can share on the application server. Connection pooling spreads the connection overhead across several user requests, thereby conserving application resources for further requests.
Connection pooling is the process of creating predefined number of database connections to a single data source. This process allows multiple users to share connections without requiring each user to incur the overhead of connecting and disconnecting from the database.
Tuning Options:

  • Minimum Connections: The minimum number of physical connections to maintain. If the size of the connection pool is at or below the minimum connection pool size, an unused timeout thread will not discard physical connections. However, the pool does not create connections solely to ensure that the minimum connection pool size is maintained.

  • Maximum Connections: The maximum number of physical connections that can be created in this pool. These are the physical connections to the back-end data store. When this number is reached, no new physical connections are created; requestors must wait until a physical connection that is currently in use is returned to the pool, or until a ConnectionWaitTimeoutException is thrown

  • Inactivity (unused) timeout: Specifies the amount of inactivity that should elapse before an unused connection is reclaimed. A value of 0 indicates not to wait, and a negative value means to wait forever.
Tuning:
The goal of tuning connection pool is to ensure that each thread that needs a connection to the database has one, and the requests are not queued up waiting to access the database. Since each thread performs a task, each concurrent thread needs a database connection.
  • Generally, the maximum connection pool size should be at least as large as the maximum size of the web container thread pool.
  • Use the same method to both obtain and close connections.
  • Minimize the number of JNDI lookups.
  • Do not declare connections as static objects.
  • Do not close connections in the finalize method.
  • If you open a connection, close the connection.
  • Do not manage data access in container-managed persistence (CMP) beans.
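To make the wait/timeout behaviour concrete, here is a minimal, self-contained sketch of the maximum-connections semantics. This is an illustration built on a plain semaphore, not WebSphere's connection manager; the generic TimeoutException stands in for WebSphere's ConnectionWaitTimeoutException:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class MiniConnectionPool {
    private final Semaphore permits;

    public MiniConnectionPool(int maxConnections) {
        // one permit per physical connection the pool may hand out
        permits = new Semaphore(maxConnections);
    }

    // Block up to timeoutMs for a free connection; failing after the
    // wait mirrors WebSphere's ConnectionWaitTimeoutException.
    public void acquire(long timeoutMs) throws InterruptedException, TimeoutException {
        if (!permits.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS)) {
            throw new TimeoutException("no free connection within " + timeoutMs + " ms");
        }
    }

    // "If you open a connection, close the connection": releasing
    // returns the permit so waiting threads can proceed.
    public void release() {
        permits.release();
    }

    public int available() {
        return permits.availablePermits();
    }
}
```

Forgetting to call `release()` here has the same effect as a connection leak in a real pool: eventually every request times out waiting.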

1D. Data source statement cache size

Data source statement cache size specifies the number of prepared JDBC statements that can be cached per connection. A plain statement executes an arbitrary SQL string passed to it; the SQL is compiled prior to execution, which is a slow process. Applications that repeatedly execute the same SQL statement can decrease processing time by using a prepared statement, and a callable statement removes the need for the SQL compilation process entirely by making a stored procedure call.
The WebSphere Application Server data source optimizes the processing of prepared statements and callable statements by caching the statements that are in use on an active connection.



Tuning
One method is to review the application code for all unique prepared statements and ensure the cache size is larger than that number.
A second option is to iteratively increase the cache size and run the application under peak steady-state load until the PMI metrics report no more cache discards.
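The discard behaviour that the PMI metrics report can be illustrated with a toy per-connection cache. The sketch below uses an access-ordered LinkedHashMap as an LRU cache keyed by SQL text; it demonstrates the sizing principle, not WebSphere's actual cache implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StatementCacheSketch {
    private int discards = 0;
    private final LinkedHashMap<String, Object> cache;

    public StatementCacheSketch(final int maxStatements) {
        // access-ordered LinkedHashMap => least-recently-used eviction
        cache = new LinkedHashMap<String, Object>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                boolean evict = size() > maxStatements;
                if (evict) {
                    discards++;  // analogous to a PMI cache-discard count
                }
                return evict;
            }
        };
    }

    // Simulate preparing a statement: a hit reuses the cached entry,
    // a miss inserts it and may push out the least-recently-used one.
    public void prepare(String sql) {
        cache.putIfAbsent(sql, new Object());
    }

    public int discards() {
        return discards;
    }
}
```

If replaying the workload still increments the discard counter, the cache is smaller than the application's set of unique statements, which is exactly what the iterative tuning approach above detects.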

1E. ORB pass by reference

The ORB [object request broker] pass by reference option determines if pass by reference or pass by value semantics should be used when handling parameter objects involved in an EJB request. The ORB pass by reference option treats the invoked EJB method as a local call and avoids the requisite object copy.
The ORB pass by reference option only provides a benefit when the EJB client and the invoked EJB module are located within the same classloader. This means both the EJB client and the EJB module must be deployed in the same EAR file and run on the same application server instance. If the EJB client and EJB module are mapped to different application server instances, the EJB module must be invoked remotely using pass by value semantics.
By default, this option is disabled and a copy of each parameter object is made and passed to the invoked EJB method.
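The difference between the two semantics can be sketched in plain Java: a local call shares the caller's object reference, while a remote call effectively receives a serialized copy. The `Params` class below is hypothetical, and serialization merely simulates the ORB's copy step:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class PassSemanticsDemo {
    // hypothetical parameter object for an EJB call
    static class Params implements Serializable {
        String value = "original";
    }

    // pass by reference: the callee mutates the caller's own object
    static void invokeLocally(Params p) {
        p.value = "changed";
    }

    // pass by value: a remote callee sees a detached copy, simulated
    // here by a serialize/deserialize round trip
    static Params copyForRemoteCall(Params p) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(p);
        out.flush();
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        return (Params) in.readObject();
    }
}
```

This also shows why enabling pass by reference can change application behaviour: code that relied on receiving a private copy will suddenly observe the callee's mutations.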


2. Messaging/JMS components tuning

There are two configuration choices that can affect the performance of the messaging components in WebSphere:
1. Message store type
2. Message reliability
2A. Message store type:
Message stores play an essential part in the operation of messaging engines. Each messaging engine has one and only one message store, which can be either a file store or a data store. A message store enables a messaging engine to preserve operating information and to retain those objects that messaging engines need for recovery in the event of a failure.

  • Local Derby database: a local, in-process Derby database is used to store the operational information and messages associated with the messaging engine. This is best suited for development environments. This configuration uses memory within the application server to manage the stored messages.

  • File based: the default option. Operating information and messages are persisted to the file system. With fast disks or RAID, this can perform better than the Derby option.

  • Remote database: a database hosted on a different machine acts as the data store. This frees the application server JVM memory that the Derby or file store configurations would otherwise use. This is the best option for production environments.
Tuning Considerations:

  • Better performance: To achieve the best performance with a data store, you often need a separate remote database server. A file store can exceed the performance of a data store backed by a remote database server without needing that separate server.

  • Low administration requirements: The file store combines high throughput with little or no administration, making it suitable for those who do not want to worry about where the messaging engine stores its recoverable data. A file store also improves on the throughput, scalability, and resilience of Derby.

  • Lower deployment costs: A data store might require database administration to configure and manage your messaging engines. A file store can be used in environments without a database server.

2B. Message reliability
WebSphere provides five options for message reliability:

  • Best effort non-persistent

    • Messages are discarded when a messaging engine stops or fails. Messages might also be discarded if a connection used to send them becomes unavailable and as a result of constrained system resources.

  • Express non-persistent

    • Messages are discarded when a messaging engine stops or fails. Messages might also be discarded if a connection used to send them becomes unavailable.

  • Reliable non-persistent

    • Messages are discarded when a messaging engine stops or fails.

  • Reliable persistent

    • Messages might be discarded when a messaging engine fails.

  • Assured persistent

    • Messages are not discarded.
Persistent messages are always stored in some form of persistent store, whereas non-persistent messages are generally stored in volatile memory. Message reliability and message delivery speed are inversely proportional: non-persistent messages are delivered quickly but do not survive messaging engine stops or crashes, while persistent messages survive but are delivered more slowly.
To learn more about message reliability, refer to: http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.pmc.doc/tasks/tjm0003_.html

3. Others

3A. Caching
Caching is always important in performance tuning, and WebSphere provides some options for it as well.

  • DynaCache provides an in-memory caching service for objects and page fragments generated by the server. The DistributedMap and DistributedObjectCache interfaces can be used within an application to cache and share Java objects by storing references to them in the cache.

  • Servlet caching enables servlet and JSP responses to be stored and managed by a set of cache rules.
For more information on this topic, refer to the ‘dynamic caching’ post published earlier.

3B. Disable unused services
Again, this is generic advice for any performance tuning: always turn off features you do not require. This ensures WebSphere uses less memory. One example is PMI: if you are using a third-party application for monitoring and do not need the built-in PMI features, turn PMI off.

3C. Web Server
Try to keep the web server on a different machine, so that WebSphere and the web server do not have to share operating system resources such as processor and memory.

3D. Http transport connections
A persistent connection indicates that an outgoing HTTP response should use a keep-alive connection instead of a connection that closes after one request/response exchange. Increasing the maximum number of persistent requests per connection can therefore yield some performance gain, and the number of requests handled per connection can also be tuned. Note, however, that keeping a connection open can sometimes be a security concern.

