© 2010 IBM Corporation
Conference materials may not be reproduced in whole or in part without the prior written permission of IBM.
Advanced Performance Tactics for WebSphere Performance
Session Number: W23
Hendrik van Run – hvanrun@uk.ibm.com
WebSphere Technical Conference and Portal
Excellence Conference
Agenda
• Introduction to WebSphere Performance
WebSphere and Your JEE Application
• Important Areas for Performance
Thread pooling
High Availability Manager
Service Integration Bus
JDBC Resource Adapter
EJB Container
Dynamic Cache Service
eXtreme Scale
Java Virtual Machine
Hardware and Operating System
• Best Practices from the Field
A number of lessons learned in recent years
Introduction to WebSphere Performance
WebSphere and Your JEE Application
[Diagram: a JEE application and a Web Services application running inside the WebSphere JVM process, on an operating system hosted on physical hardware or a hypervisor. Inbound requests arrive over HTTP and SOAP/HTTP at the WebContainer and over RMI/IIOP at the EJB Container. The JDBC Resource Adapter connects to a relational database; the WebSphere MQ JMS Resource Adapter connects to WebSphere MQ; the Service Integration Bus Resource Adapter, outbound SOAP/HTTP connection pool, Dynamic Cache Service and Transaction Service sit alongside, and the server also uses the filesystem (NAS v4.0 or SAN).]
Important Areas for Performance
Important Areas for Performance
Thread pooling
• Thread pools in WebSphere control the execution of application code
Determine how (logical) CPUs are assigned to application requests
Provide a queuing mechanism that controls how many requests can be executed in parallel
Apply a funnel-based approach when sizing these pools
Example
IBM HTTP Server (maxclients) 600
WAS Web Container thread pool 30
JDBC connection pool 15
• Thread pools need to be sized with the total number of hardware processor cores in mind
Most likely to see the best performance with a relatively small number of active threads
Overhead associated with context switching is lower
Default sizes for WAS thread pools are typically quite high
Two threads for each physical processor core is a good starting point
Optimum varies widely with application and usage pattern
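The two-threads-per-core starting point can be sketched in plain Java. This is illustrative only: WebSphere thread pools are sized through the administrative configuration, not in application code, and the multiplier is just the rule of thumb above, to be refined under load testing.

```java
// Illustrative sketch only -- it simply encodes the rule of thumb above:
// start at roughly two threads per processor core, then tune under load.
public class PoolSizing {
    static int startingPoolSize(int processorCores) {
        return processorCores * 2;    // the suggested starting point
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("cores=" + cores
                + " suggested starting pool size=" + startingPoolSize(cores));
    }
}
```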
Important Areas for Performance
Thread pooling
• Understand which thread pools your application uses and size all of them
appropriately based on utilisation you see in tuning exercises
Thread dumps, PMI metrics, etc will give you this data
Several thread pools exist in WebSphere, including:
Web Container – runs servlet/JSPs and SOAP/HTTP Web Services application code
Default – used for a variety of tasks, including the MDB application code invoked through the
Service Integration Bus
ORB – used for EJBs when called remotely
WMQCommonServices – used for integration with WebSphere MQ
Important Areas for Performance
High Availability Manager
• High Availability Manager in WebSphere Application Server
Allows singleton services to make themselves highly available
Transaction manager log recovery service in clusters
Default messaging provider (Service Integration Bus)
Allows servers to easily exchange state data
This mechanism is commonly referred to as the bulletin board
Provides a specialised framework for high speed and reliable messaging between
processes
This is used by the data replication service (DRS)
• High Availability Manager operates within a core group
Group of WebSphere JVM processes within a cell
Frequent peer-to-peer communication within the core group (“heartbeats”)
Ensures a common understanding of the running processes
By default all processes are in the same core group DefaultCoreGroup
• This approach allows the application server to participate in HA environments
WebSphere Application Server is part of the story
True HA requires planning beyond the software to all systems involved
Important Areas for Performance
High Availability Manager
• WebSphere Network Deployment has been shipped with HA Manager since V6.0
Requires synchronisation between WebSphere processes at start up
More CPU intensive and longer startup time as the core group size increases
• Recommendation to restrict the size of a single core group to 50 or less
High performance sites might require smaller core groups
Instances may be marked down because of delayed heartbeat response
Refer to IBM whitepaper “Best Practices for Large WebSphere Topologies”
http://www.ibm.com/developerworks/websphere/library/techarticles/0710_largetopologies/0710_largetopologies.html
• Be careful before disabling HA Manager
A number of WebSphere runtime services require HA Manager
Data Replication Services (DRS)
Singleton service failover (SIBus, peer-to-peer transaction log recovery)
Workload management routing
Refer to “When to use a high availability manager”
http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/crun_ha_ham_required.html
HA Manager can be disabled for each process in the cell
http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/trun_ha_ham_enable.html
Important Areas for Performance
Service Integration Bus
• Message reliability is a key factor in performance
Less reliable transport incurs less overhead
However, correctness and SLAs are key considerations
• WAS supports file and database persistence
Pick the mechanism best suited to your needs
Performance, HA, skills, budget, etc.
• Tuning potential in this layer
Discardable and cached data buffers (default size is 320 KB)
Adjust buffer sizes to avoid message discard or overflow to the persistent store
• Reliability levels (reliability increases and performance decreases down this list):
BEST_EFFORT_NONPERSISTENT
Messages are never written to disk; messages are thrown away if the memory cache overruns
EXPRESS_NONPERSISTENT
Messages are written asynchronously to persistent storage if the memory cache overruns, but are not kept over server restarts
No acknowledgement that the ME has received the message
RELIABLE_NONPERSISTENT
Same as EXPRESS_NONPERSISTENT, except that the client code waits for a low-level acknowledgement message before returning to the application with an OK or not-OK response
RELIABLE_PERSISTENT
Messages are written asynchronously to persistent storage during normal processing, and stay persisted over server restarts
If the server fails, messages might be lost if they are only held in the cache at the time of failure
ASSURED_PERSISTENT
Highest degree of reliability where assured delivery is supported
Important Areas for Performance
Service Integration Bus
• Different options available for the message store
Flat file (new in WebSphere Application Server V6.1)
JDBC database (traditional)
• Flat file advantages
Generally easier to setup
Faster than database engine for shared footprints
• Flat file HA participation
Place file on highly available, shared drive
GPFS, NFSv4, etc.
Use IBM File System Locking Protocol Test for verification
http://www-01.ibm.com/support/docview.wss?uid=swg21215152
• Messaging improvements in V7.0
More efficient internal memory management in this layer
Gains in almost all areas (persistent, non-persistent, PTP, PubSub)
Performance of larger messages also improved
[Diagram: SIBus messaging engine inside a server, persisting its ME data store either via JDBC to a database or directly to the filesystem]
Important Areas for Performance
JDBC Resource Adapter
• JDBC Resource Adapter uses a pool of connections to the database
This pool can be highly contended in multithreaded applications
Correct sizing of the pool can yield significant gains in performance
• Connections in the pool can be monitored through PMI
Watch for threads waiting on connections to the database
The PMI metric “WaitTime” is the average waiting time in milliseconds
If wait time is significant consider doing one of the following
Increase the number of pooled connections in conjunction with your DBA
Decrease the number of active threads in the system
In some cases, a one-to-one mapping between DB connections and threads
may be ideal
• Database problems often manifest themselves as a large number of threads from your thread pool waiting for available connections
Deadlocks, lock timeouts, long-running SQL queries
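The pool contention described above can be sketched as a hypothetical model, not WebSphere internals: a connection pool behaves like a semaphore with one permit per pooled connection, and the time a thread spends blocked waiting for a permit is what the PMI "WaitTime" metric reports.

```java
import java.util.concurrent.Semaphore;

// Hypothetical model of a connection pool: a Semaphore with one permit
// per pooled connection. Threads beyond the pool size block in acquire();
// that blocked time is what PMI reports as "WaitTime".
public class PoolModel {
    private final Semaphore permits;

    PoolModel(int poolSize) {
        permits = new Semaphore(poolSize, true); // fair: FIFO ordering of waiters
    }

    // Borrow a connection, returning how long we waited for it (ms).
    long borrowAndMeasureWaitMillis() {
        long start = System.nanoTime();
        permits.acquireUninterruptibly();        // blocks while the pool is empty
        return (System.nanoTime() - start) / 1_000_000;
    }

    void release() {
        permits.release();                       // return the connection to the pool
    }

    public static void main(String[] args) {
        PoolModel pool = new PoolModel(1);
        System.out.println("wait ms: " + pool.borrowAndMeasureWaitMillis());
        pool.release();
    }
}
```

Raising the pool size adds permits (more concurrent database work); reducing active threads shrinks the set of potential waiters.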
Important Areas for Performance
JDBC Resource Adapter
• Tune the Prepared Statement Cache Size for each JDBC data source
Most applications use Prepared Statements
Especially when using persistency frameworks
Hibernate
JPA
EJB entity beans
The number of prepared statements that can be cached is important
Each JDBC data source connection maintains its own individual cache
Default of 50 prepared statements but you can override this
Number of Prepared Statement discards can be monitored through PMI
• Always use the latest JDBC driver for the database you are running
Performance optimisation in this space between versions can be significant
• Use a Type 4 (thin) JDBC driver where possible
Keeps allocation of memory in the JVM native heap to a minimum
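Conceptually, each connection's statement cache behaves like a small LRU map keyed by SQL text. The sketch below is an illustration of that idea, not the WebSphere implementation; the eviction counter stands in for the discard count that PMI reports.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of per-connection statement caching: an LRU map
// keyed by SQL text. When the cache is full, the least recently used
// statement is discarded -- the discards that PMI counts.
public class StatementCache extends LinkedHashMap<String, Object> {
    private final int capacity;
    int discards = 0;

    StatementCache(int capacity) {
        super(16, 0.75f, true);      // access-order iteration gives LRU behaviour
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
        boolean evict = size() > capacity;
        if (evict) discards++;       // a discard means re-preparing that SQL later
        return evict;
    }

    public static void main(String[] args) {
        StatementCache cache = new StatementCache(2);
        cache.put("SELECT A", new Object());
        cache.put("SELECT B", new Object());
        cache.put("SELECT C", new Object());   // evicts "SELECT A"
        System.out.println("size=" + cache.size() + " discards=" + cache.discards);
    }
}
```

A steadily climbing discard count suggests the cache size (default 50) is too small for the application's distinct statements.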
Important Areas for Performance
EJB 3.0
• EJB 3.0
Performance improvements over EJB 2.1 (compared to EJB 2.1 on V7.0)
Improved developer experience
Annotated POJOs
More deployment options
Example: Just-in-Time deploy feature
• Caching flexibility with JPA
“Eager” pre-loads related entities
“Lazy” waits for a specific access before loading
Pick the strategy that best fits the application’s use of data
Caching unnecessary data increases memory footprint and GC overhead
Repeated retrievals of same data increases processing cost (CPU, etc.)
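The eager/lazy trade-off can be shown in plain Java (JPA hides this behind its fetch types). This is a sketch of the concept, not JPA itself: the counter stands in for extra SQL round trips.

```java
import java.util.function.Supplier;

// Plain-Java sketch of the eager vs lazy trade-off. Eager loading pays
// the query cost up front; lazy loading defers it until the related
// data is actually accessed -- if it ever is.
public class FetchDemo {
    static int queries = 0;           // stands in for extra SQL round trips

    static String loadRelatedEntities() {
        queries++;
        return "related rows";
    }

    public static void main(String[] args) {
        // Eager: related entities are loaded together with the parent.
        String eager = loadRelatedEntities();
        System.out.println("after eager load, queries = " + queries);   // 1

        // Lazy: nothing is loaded until get() is called.
        Supplier<String> lazy = FetchDemo::loadRelatedEntities;
        System.out.println("after lazy setup, queries = " + queries);   // still 1
        lazy.get();                   // first access triggers the query
        System.out.println("after lazy access, queries = " + queries);  // 2
        System.out.println(eager);
    }
}
```

If the related data is always used, eager avoids repeated retrieval cost; if it is rarely used, lazy avoids the memory footprint and GC overhead of caching it unnecessarily.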
Important Areas for Performance
Dynamic Cache Service
• Policy Based caching (defined via the cachespec.xml configuration file)
Servlet/JSP
Commands
Web Service
Web Service Client (JAX-RPC)
• API based caching
Distributed Map
Cacheable Servlet
Cacheable Command
• Distribution features
Distribute cache contents to peer instances within a cluster
Push content to HTTP servers and other “edge” components
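A minimal cachespec.xml policy along the lines described above might look like the following sketch; the JSP name, parameter id and timeout are illustrative values, not defaults.

```xml
<cache>
  <cache-entry>
    <class>servlet</class>
    <name>/product.jsp</name>
    <cache-id>
      <!-- one cache entry per distinct productId request parameter -->
      <component id="productId" type="parameter">
        <required>true</required>
      </component>
      <!-- evict the entry after 300 seconds -->
      <timeout>300</timeout>
    </cache-id>
  </cache-entry>
</cache>
```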
Important Areas for Performance
Dynamic Cache Service
• Ability to control the size of cache
Previously only the number of cache entries was configurable
Size of cache in bytes can now be configured
Applies to both in-memory and disk cache
Increases resilience and improves manageability
• Disk offload
Allows cache to overflow to disk
Optionally save/persist cache contents to disk upon stop
Significant performance enhancements for disk offload in V6.1
“High performance” mode yields best performance
but requires more memory
Enhancements have been backported to older versions
http://www.ibm.com/support/docview.wss?uid=swg24013097
• Servlet and Object Cache Instances
Multiple cache instances within the same server
Allows for more fine-grained control over cache entries
• Enhanced cache monitor application
http://www.ibm.com/developerworks/websphere/downloads/cache_monitor.html
(New in V7.0)
Important Areas for Performance
WebSphere eXtreme Scale Dynamic Cache Provider
• WebSphere eXtreme Scale can be used as Dynamic Cache Provider
Supported from WAS 6.1.0.25 and WAS 7.0.0.5
No changes to the application required!
• Can provide significant benefit to
Dynamic Cache Service applications
No more redundant data in clusters
Better replication capabilities
Can scale up to very large caches
without relying on disk subsystems
• Choice of two options
WebSphere Extreme Scale 7.0 or 7.1
WebSphere DataPower XC10 appliance
Important Areas for Performance
Java Virtual Machine
• Sizing the Java heap appropriately is key to good WebSphere performance
Always keep the JVM(s) within the physical memory of the server
• Determine good maximum heap size through testing
Enable verbose:gc and analyse the native_stderr.log
Use “Garbage Collection and Memory Visualizer (GCMV)”
Tool available through IBM Support Assistant
http://www.ibm.com/software/support/isa/
Other tools can be used as well
Good starting point for maximum heap size
512 MB for WebSphere Application Server
1024 MB for WebSphere Process Server
1024 MB for WebSphere Portal Server
• Determining the minimum heap size is usually easier
Production systems set minimum heap size lower than the maximum
Allows the JVM to determine the optimal heap size
Can result in a more efficient object table
Gives headroom for emergencies
Some “Burst” scenarios might require setting the minimum equal to the maximum
Avoids resizing the heap completely
With generational GC on IBM Java 6.0, the default nursery size is 25% of the maximum heap size
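As an illustrative starting point, the generic JVM arguments for a server along the lines above might look like the following; the values are examples to be tuned through verbose:gc analysis, not recommendations, and -Xgcpolicy:gencon / -Xmn apply only when the generational collector is chosen.

```
-Xms256m -Xmx512m -verbose:gc -Xgcpolicy:gencon -Xmn128m
```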
Important Areas for Performance
Java Virtual Machine
• WAS 6.0
IBM Java 1.4.2 JVM
Flat memory model
Generational model for selected 64 bit editions
32 and 64 bit editions
• WAS 6.1
Introduction of IBM Java 5.0 JVM
Flat or generational garbage collection
Default is flat
32 and 64 bit editions
Support for shared classes
• WAS 7.0
Introduction of IBM Java 6.0 JVM
Flat or generational garbage collection
Default is flat
32 and 64 bit editions
Compressed Reference Technology provides enhanced 64 bit performance
Enhanced support for shared classes
Important Areas for Performance
Java Virtual Machine – IBM Java 1.4.2 JVM
• Pinned and dosed objects are unmovable during compaction
Example: Native calls generate pinned objects
• Heap fragmentation could become an issue
Especially true with large objects
OutOfMemory occurs even if there appears to be sufficient free heap available
• kCluster is an area of storage for class blocks
Default size = 1280 (entries) x 256 (bytes)
Can be reset via JVM argument -Xk<size>
• pCluster is an area for pinned objects
16KB by default
Newly created pCluster are 2KB in size
Can be reset using -Xp<iiii>[K][,<oooo>[K]], but only on AIX when using the subpool GC policy
iiii = initial pCluster size
oooo = size of subsequent overflow pClusters
Important Areas for Performance
Java Virtual Machine – IBM Java 5.0 JVM
• IBM implementation of Java 5.0 runtime
Built to the Java 5.0 specifications
• IBM Java 5.0 JVM manages JNI objects differently
No pinned or dosed objects
Reduces fragmentation
• Two memory models available in IBM Java 5.0 JVM
Flat model
Similar to IBM Java 1.4.2 model
Generational
Similar idea as Sun and HP generational collectors
Simplified model and tuning
• Variety of garbage collection policies
Optimizations for pause times or processing time
Sub-pool support to reduce thread contention
• Improved Just-in-Time (JIT) compiler
Longer warm-up for optimized code
No longer disabled for remote debugging
Faster WAS startup times with remote debugger attached
Important Areas for Performance
Java Virtual Machine – IBM Java 6.0 JVM
• IBM Java 6.0 introduced Compressed Reference (CR) technology
Reduces the width of 64 bit Java heap references to 32 bits
Implements an efficient bit shifting algorithm to accomplish this
Exploits the fact that all references (pointers) are 8 byte aligned
Can be used for heap sizes up to 28 GB
WAS V7.0 enables this by default for heap sizes up to 25 GB
• WebSphere Application Server 7.0 benefits from CR technology
Reduces 64 bit WebSphere memory footprint back to 32 bit equivalent
Typical memory footprint of 64 bit WebSphere 6.1/6.0 was 60-70% larger than 32 bit
Reduces performance overhead of 64 bit WebSphere
Performance is generally around 95% compared to 32 bit WebSphere
Minimal performance cost is due to reference compression/decompression
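The bit-shifting idea can be sketched in a few lines. Because objects are 8-byte aligned, the low 3 bits of every address are zero, so a reference can be stored in 32 bits and recovered with a shift; 2^32 references x 8 bytes gives the 32 GB theoretical ceiling behind the 28 GB practical limit above. This is an illustration of the encoding, not the JVM's actual implementation.

```java
// Sketch of compressed references: 8-byte alignment leaves the low
// 3 bits of every address zero, so a 64 bit pointer can be stored in
// 32 bits and recovered with a shift.
public class CompressedRefs {
    static int compress(long address) {
        return (int) (address >>> 3);     // drop the three always-zero bits
    }

    static long decompress(int ref) {
        return (ref & 0xFFFFFFFFL) << 3;  // widen unsigned, restore alignment
    }

    public static void main(String[] args) {
        long addr = 0x1_0000_0008L;       // any 8-byte-aligned address
        System.out.println(decompress(compress(addr)) == addr); // round-trips
        // Maximum addressable heap: 2^32 references * 8 bytes = 32 GB
        System.out.println((1L << 32) * 8 / (1L << 30) + " GB");
    }
}
```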
• Full details provided in an IBM whitepaper
“IBM WebSphere Application Server WAS V7 64-bit performance – Introducing
WebSphere Compressed Reference Technology”
ftp://ftp.software.ibm.com/software/webserver/appserv/was/WAS_V7_64-bit_performance.pdf
(New in V7.0)
Important Areas for Performance
Java Virtual Machine – IBM Java 6.0 JVM
• IBM Java 6.0 further extends the support for shared classes
IBM Java 5.0 introduced the sharing of common classes
Reduction in startup time and memory footprint
Useful when running many JVM processes on the same machine
• Shared cache information can now be persisted to the filesystem
Cache can survive a system restart
Reduces startup time of WebSphere processes, even after a reboot!
-Xshareclasses:persistent enables the persistent cache (default except on z/OS)
-Xshareclasses:nonpersistent disables the persistent cache
• Shared cache can now also store Ahead of Time (AOT) compiled code
The Just in Time (JIT) compiler generates AOT compiled code (native code)
Speeds up execution of application code
Compilation can now be avoided if the AOT compiled code is in the shared cache
IBM Java 5.0 only allowed for sharing the static class data
Example: -Xscmx50M -Xscminaot5M -Xscmaxaot10M
Creates a 50 MB cache, guaranteeing at least 5 MB of space but no more than 10 MB for AOT compiled code
Important Areas for Performance
Hardware and Operating System – Partitioning on System p
• Partitioning capabilities are
very powerful
Static LPARs
Dynamic LPARs
Dynamic Micro-Partitioning
WPARs
• Beware of performance impact of
Dynamic Micro-Partitioning
Fractional amount of physical CPU
resources can be assigned to LPARs
Processor cache of physical CPU will
be flushed frequently
Frequent flushing of processor cache
can significantly reduce performance
for applications that are sensitive to
the efficiency of these caches, for
example:
WebSphere Application Server
DB2 Universal Database
[Diagram: multiple LPARs, each with its own operating system and virtual CPUs, sharing the physical CPUs of the same physical hardware]
Important Areas for Performance
Hardware and Operating System – Large Page support on System p
• Large Page support on Power 4 or higher
16 MB memory pages instead of default 4 KB
10-15% better performance for processes that use a lot of memory
For example WebSphere Application Server
Large page support requires all memory pages in a 256 MB segment to be
large pages
Requires a change to OS configuration plus reboot
• Medium Page support on Power5+ and higher
64 KB memory pages instead of default 4 KB
Almost the same performance boost as 16 MB pages
No need to change OS configuration
• Full details in WebSphere information center
http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/tprf_tuneaix.html
Best Practices from the Field
Best Practices from the Field
Outbound SOAP/HTTP Web Services bottleneck
• Connections for outbound SOAP/HTTP Web Services calls are pooled
Default size of this pool is set to 25
The reasoning is that this pool should never exceed the number of threads in the Web Container thread pool
This might be valid when the Web Services client is running in the same JVM as the server
Several scenarios where the above is not the case
Pool can be sized through JVM custom property
com.ibm.websphere.webservices.http.maxConnection
Several other custom properties available to deal with timeouts, proxies, etc
http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/rwbs_httptransportprop.html
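For example, the outbound pool could be enlarged by defining the JVM custom property on the client-side server; the value 50 here is illustrative, the right number depends on the concurrency observed in tuning:

```
com.ibm.websphere.webservices.http.maxConnection=50
```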
• A bottleneck here can be easily seen from a threaddump
Note method OutboundConnectionCache.findGroupAndGetConnection
3XMTHREADINFO "Default : 2030" (TID:0x0000008097D66900, sys_thread_t:0x0000008080C63520, state:CW, native ID:0x0000000000000612) prio=5
4XESTACKTRACE at java/lang/Object.wait(Native Method)
4XESTACKTRACE at java/lang/Object.wait(Object.java:231(Compiled Code))
4XESTACKTRACE at com/ibm/ws/webservices/engine/transport/channel/OutboundConnectionCache.findGroupAndGetConnection(OutboundConnectionCache.java:317(Compiled Code))
4XESTACKTRACE at com/ibm/ws/webservices/engine/transport/http/HTTPSender.invoke(HTTPSender.java:521(Compiled Code))
...
4XESTACKTRACE at be/fgov/kszbcss/soa/srm/sbrequesthandler/impl/OnlineChannelFactoryITFSelectorImpl.send(Bytecode PC:22(Compiled Code))
...
4XESTACKTRACE at be/fgov/kszbcss/soa/srm/sbrequesthandler/impl/JavaSRMRequestHandlerComponentImpl.handleRequest4Supplier(Bytecode PC:486(Compiled Code))
Best Practices from the Field
Configure Aggressive Hung Thread Detection Policy
• WebSphere has a built-in detection mechanism for long running threads
Logs a message to SystemOut.log if a thread has been active for over 600 seconds
Detection logic runs once every 180 seconds
• For OLTP workloads threads are not expected to run for more than a few seconds
Default threshold of 600 seconds is excessive
Long running threads keep hold of other resources so this condition should be avoided
Common misunderstanding that transaction timeouts would stop a running thread
Threshold can be overridden through a JVM custom property
com.ibm.websphere.threadmonitor.threshold
• WebSphere can also generate a threaddump when long running threads are detected
Can be very helpful for root-cause analysis especially in production environments
Can be enabled through a JVM custom property
com.ibm.websphere.threadmonitor.dump.java
• Full details available in the information center
http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/ttrb_confighangdet.html
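An aggressive policy along the lines above might be expressed as JVM custom properties like the following; the 60-second threshold is an illustrative value for an OLTP workload, and the threshold is specified in seconds:

```
com.ibm.websphere.threadmonitor.threshold=60
com.ibm.websphere.threadmonitor.dump.java=true
```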
[18/02/10 13:05:14:377 GMT] 00000014 ThreadMonitor W WSVR0605W: Thread "WebContainer : 52" (00000113) has been active for 614687 milliseconds and may be hung. There is/are 1 thread(s) in total in the server that may be hung.
Best Practices from the Field
High CPU Utilisation with WebSphere V7.0 on AIX
• Customers observe high CPU utilisation when running WAS V7.0 on AIX
There is a known defect in IBM Java 6.0 SR6 and SR7 on AIX
The Java attach agent gets stuck in a loop trying to open a semaphore
http://www-01.ibm.com/support/docview.wss?uid=isg1IZ73533
• The above problem can be confirmed from a thread dump as shown below
• Problem has been resolved in IBM Java 6.0 SR8
Workaround is to disable the Java attach agent
Use JVM custom property com.ibm.tools.attach.enable=no
3XMTHREADINFO "Attach handler" J9VMThread:0x00000000300A4400, j9thread_t:0x0000000111DA5DC0, java/lang/Thread:0x0000000040035980, state:CW, prio=6
3XMTHREADINFO1 (native thread ID:0x26600CD, native priority:0x6, native policy:UNKNOWN)
3XMTHREADINFO3 Java callstack:
4XESTACKTRACE at com/ibm/tools/attach/javaSE/IPC.openSemaphoreImpl(Native Method)
4XESTACKTRACE at com/ibm/tools/attach/javaSE/IPC.reopenSemaphore(IPC.java:182(Compiled Code))
4XESTACKTRACE at com/ibm/tools/attach/javaSE/AttachHandler.waitForNotification(AttachHandler.java:181(Compiled Code))
4XESTACKTRACE at com/ibm/tools/attach/javaSE/AttachHandler.run(AttachHandler.java:163(Compiled Code))
3XMTHREADINFO3 Native callstack:
4XENATIVESTACK _event_wait+0x2b8 (0x090000000070985C [libpthreads.a+0x1685c])
4XENATIVESTACK _cond_wait_local+0x4e4 (0x0900000000717568 [libpthreads.a+0x24568])
4XENATIVESTACK _cond_wait+0xbc (0x0900000000717B40 [libpthreads.a+0x24b40])
4XENATIVESTACK pthread_cond_wait+0x1a8 (0x09000000007187AC [libpthreads.a+0x257ac])
Best Practices from the Field
Energy savings mode can limit performance
• Modern platforms such as Power7
provide several energy saving options
“Dynamic Power Saver” – reduces CPU frequency depending on workload
“Static Power Saver” – reduces CPU
frequency permanently
• Make sure that “Static Power Saver” is
not enabled when performance is vital
Can be disabled through Hardware
Management Console (HMC)
Use “lparstat” to confirm this
http://www.ibm.com/developerworks/wikis/display/WikiPtype/CPU+frequency+monitoring+using+lparstat
lp_newpap_904_st: /home/wasadmin>lparstat -E 2 2
System configuration: type=Shared mode=Uncapped smt=4 lcpu=16 mem=34816MB psize=12 ent=4.00
--------Actual-------- freq ------Normalised------
user sys wait idle --------- user sys wait idle
12.56 2.510 0.249 0.681 2.1GHz[ 70%] 8.792 1.757 0.174 5.277
12.13 2.602 0.237 1.031 2.1GHz[ 70%] 8.491 1.821 0.166 5.522
[Chart: "Physical CPU vs Entitlement - lp_newpap_904_st", showing physical CPU consumption against the entitlement of 4.0, annotated with the 70% frequency cap]
Best Practices from the Field
Avoid Power7 LPARs that span multiple processor sockets
• A single Power7 chip consists of multiple cores
4, 6 or 8 cores depending on model
• LPARs that span multiple chips/sockets
incur a performance overhead
Numbers ???
• Recommendations
Restrict the size of the LPAR to the number
of cores per chip
Ensure that all cores from that LPAR are
assigned to the same chip
$ lssrad -va
REF1 SRAD MEM CPU
0 0 6180.69 24-31
1 1 498.00 56-63

$ lssrad -va
REF1 SRAD MEM CPU
0 0 7695.19 0-31
1 1 236.25 32-63

(The original slide marked one of these lssrad outputs “Bad” and the other “Good”.)
Best Practices from the Field
Measuring and Monitoring
• Obtaining meaningful system metrics
from virtualized or partitioned
environments
• Measuring inside the VM image or
partition does not represent the whole
environment
• Use the appropriate commands to view
system resources vs. image/partition
resources
Micropartitioned environment might
show 95% CPU utilization from some
commands
Looking at the broader environment
The partition is using 95% of the CPU
currently allocated
The hypervisor has additional CPU in
reserve pending allocation
Thus the server has not exhausted its
CPU yet
[Diagram: multiple LPARs, each with its own operating system and virtual CPUs, sharing the physical CPUs of the same physical hardware]
Best Practices from the Field
Do you really need 64 bit WebSphere?
• Myth – “If the OS is 64 bit, we must use 64-bit WebSphere as well”
• WebSphere 32 bit edition
Supported for most 64 bit OS environments
Check the support matrix for details
Benefits from a smaller physical memory footprint
This is true on both 32 and 64 bit OS
Can more easily use the full 32 bit address space
No limitations due to reserved address space for kernel or libraries
• WebSphere 64 bit edition
Allows the JVM to grow well beyond 32 bit process size boundaries
Memory constrained applications can see substantial benefits
Allows for more extensive use of caching
The use of 64 bit registers benefits certain application code
Security algorithms are a good example
Performance penalty and Java memory overhead are relatively small on WAS V7.0
Compressed Reference is enabled by default for heap sizes up to 25 GB
Native memory usage is still higher than 32 bit
Summary
• WebSphere performance spans many areas and features
• High-end function and features provide performance
when used appropriately
• Hardware and OS selection play important roles in performance
Questions?
• Thank you for attending!
Please complete your session evaluation, session number is W23
Further Reading
Performance Analysis for Java Web Sites
Joines, Willenborg, Hygh
IBM WebSphere Deployment and Advanced Configuration
Barcia, et al
IBM WebSphere System Administration
Williamson, et al
WebSphere Application Server: Step by Step
Turaga, et al
Persistence in the Enterprise
Barcia, Hambrick, et al
References
• “Best Practices for Large WebSphere Topologies”
http://www.ibm.com/developerworks/websphere/library/techarticles/0710_largetopologies/0710_largetopologies.html
• “IBM WebSphere Application Server WAS V7 64-bit performance – Introducing WebSphere Compressed Reference Technology”
ftp://ftp.software.ibm.com/software/webserver/appserv/was/WAS_V7_64-bit_performance.pdf
• IBM Support Assistant
http://www.ibm.com/software/support/isa/
• Many thanks to Stacy Joines, John Stecher and Chris Blythe for their
contributions to this presentation
Other Sessions
• W08 – IBM WebSphere eXtreme Scale: Introduction
• W17 – WAS 7 and Java 6 Performance Tuning
• W21 – Service Integration Bus in WAS 7.0
W23 - Advanced Performance Tactics for WebSphere Performance

  • 1. © 2010 IBM Corporation Conference materials may not be reproduced in whole or in part without the prior written permission of IBM. Advanced Performance Tactics forAdvanced Performance Tactics for WebSphere PerformanceWebSphere Performance Session Number: W23Session Number: W23 Hendrik van RunHendrik van Run –– hvanrun@uk.ibm.comhvanrun@uk.ibm.com
  • 2. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 2 Agenda • Introduction to WebSphere Performance WebSphere and Your JEE Application • Important Areas for Performance Thread pooling High Availability Manager Service Integration Bus JDBC Resource Adapter EJB Container Dynamic Cache Service eXtreme Scale Java Virtual Machine Hardware and Operating System • Best Practices from the Field A number of lessons learned in recent years
  • 3. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 3 Introduction to WebSphere Performance WebSphere and Your JEE Application Operating System JVM process Relational Database Physical Hardware or HyperVisor WebSphere WebContainer JDBC Resource Adapter Outbound SOAP/HTTP Connection Pool Service Integration Bus Resource Adapter DynamicCache Service Transaction Service JEE application Web Services Application HTTP SOAP/HTTP EJB ContainerRMI/IIOP Filesystem SOAP/HTTP NAS v4.0 or SAN filesystem WebSphere MQ JMS Resource Adapter WebSphere MQ
  • 4. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 4 Important Areas for Performance
  • 5. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 5 Important Areas for Performance Thread pooling • Thread pools control in WebSphere control the execution of application code Determines how (logical) CPUs are assigned application requests Provides a queuing mechanism that controls how many concurrent requests can be executed in parallel Apply the funnel based approach to sizing these pools Example IBM HTTP Server (maxclients) 600 WAS Web Container thread pool 30 JDBC connection pool 15 • Thread pools need to be sized with the total number of hardware processor cores in mind Most likely to see best performance with a relative small number of active threads Overhead associated with context switching is lower Default size for WAS thread pools are typically quite high Two threads for each physical processor core is a good starting point Optimum varies widely with application and usage pattern
  • 6. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 6 Important Areas for Performance Thread pooling • Understand which thread pools your application uses and size all of them appropriately based on utilisation you see in tuning exercises Thread dumps, PMI metrics, etc will give you this data Several thread pools exist in WebSphere, including: Web Container – runs servlet/JSPs and SOAP/HTTP Web Services application code Default – used for a variety of tasks, including the MDB application code invoked through the Service Integration Bus ORB – used for EJBs when called remotely WMQCommonServices – used for integration with WebSphere MQ
  • 7. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 7 Important Areas for Performance High Availability Manager • High Availability Manager in WebSphere Application Server Allows singleton services to make themselves highly available Transaction manager log recovery service in clusters Default messaging provider (Service Integration Bus) Allows servers to easily exchange state data This mechanism is commonly referred to as the bulletin board Provides a specialised framework for high speed and reliable messaging between processes This is used by the data replication service (DRS) • High Availability Manager operates within a core group Group of WebSphere JVM processes within a cell Frequent peer-to-peer communication withine the core group (“heartbeats”) Ensures a common understanding of the running processes By default all processes are in the same core group DefaultCoreGroup • This approach allows the application server to participate in HA environments WebSphere Application Server is part of the story True HA requires planning beyond the software to all systems involved
  • 8. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 8 Important Areas for Performance High Availability Manager • WebSphere Network Deployment has been shipped with HA Manager since V6.0 Requires synchronisation between WebSphere processes at start up More CPU intensive and longer startup time as the core group size increases • Recommendation to restrict the size of a single core group to 50 or less High performance sites might require smaller core groups Instances may be marked down because of delayed heartbeat response Refer to IBM whitepaper “Best Practices for Large WebSphere Topologies” http://www.ibm.com/developerworks/websphere/library/techarticles/0710_largetopologies/071 0_largetopologies.html • Be careful before disabling HA Manager An number of WebSphere runtime services require HA Manager Data Replication Services (DRS) Singleton service failover (SIBus, peer-to-peer transaction log recovery) Workload management routing Refer to “When to use a high availability manager” http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd .doc/info/ae/ae/crun_ha_ham_required.html HA Manager can be disabled for each process in the cell http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd .doc/info/ae/ae/trun_ha_ham_enable.html
  • 9. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 9 Important Areas for Performance Service Integration Bus • Message reliability is a key factor in performance Less reliable transport incurs less overhead However, correctness and SLAs are key considerations • WAS supports file and database persistence Pick the mechanism best suited to your needs Performance, HA, skills, budget, etc. • Tuning potential in this layer Discardable and cached data buffers Default size is 320 KB Adjusting buffer sizes to avoid Message discard, or Overflow to persistent store BEST_EFFORT_NONPERSISTENT Messages are never written to disk throw away messages if memory cache over-runs EXPRESS_NONPERSISTENT Messages are written asynchronously to persistent storage if memory cache overruns, but are not kept over server restarts No acknowledgement that the ME has received the message RELIABLE_NONPERSISTENT Same as Express_Nonpersistent, except, we have a low level acknowledgement message that the client code waits for, before returning to the application with an OK or not OK response RELIABLE_PERSISTENT Messages are written asynchronously to persistent storage during normal processing, and stay persisted over server restarts. If the server fails, messages might be lost if they are only held in the cache at the time of failure. ASSURED_PERSISTENT Highest degree of reliability where assured delivery is supported Reliability Performance
  • 10. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 10 Important Areas for Performance Service Integration Bus • Different options available for message store Introduced in WebSphere Application Server V6.1 Flat file (new) JDBC database (traditional) • Flat file advantages Generally easier to setup Faster than database engine for shared footprints • Flat file HA participation Place file on highly available, shared drive GPFS, NFSv4, etc. Use IBM File System Locking Protocol Test for verification http://www-01.ibm.com/support/docview.wss?uid=swg21215152 • Messaging improvements in V7.0 More efficient internal memory management in this layer Gains in almost all areas (persistent, non-persistent, PTP, PubSub) Performance of larger messages also improved Server ME database JDBC filesystem SIBus
  • 11. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 11 Important Areas for Performance JDBC Resource Adapter • JDBC Resource Adapter uses a pool of connections to the database This pool can be highly contended in multithreaded applications Correct sizing of the pool can yield significant gains in performance • Connections in the pool can be monitored through PMI Watch for threads waiting on connections to the database The PMI metric “WaitTime” is the average waiting time in milliseconds If wait time is significant consider doing one of the following Increase the number of pooled connections in conjunction with your DBA Decrease the number of active threads in the system In some cases, a one-to-one mapping between DB connections and threads may be ideal • Database problems often manifest themselves as a large number of threads from your thread pool waiting for avaiable connections Deadlocks, lock timeouts, long-running SQL queries
  • 12. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 12 Important Areas for Performance JDBC Resource Adapter • Tune the Prepared Statement Cache Size for each JDBC data source Most application use Prepared Statements Especially when using persistency frameworks Hibernate JPA EJB entity beans The number of prepared statements that can be cached is important Each JDBC data source connection maintains its own individual cache Default of 50 prepared statements but you can override this Number of Prepared Statement discards can be monitored through PMI • Always use the latest JDBC driver for the database you are running Performance optimisation in this space between versions can be significant • Use a Type 4 (thin) JDBC driver where possible Keeps allocation of memory in the JVM native heap to a minimum
  • 13. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 13 Important Areas for Performance EJB 3.0 • EJB 3.0 Performance improvements over EJB 2.1 Compared to EJB 2.1 on V7.0 Improved developer experience Annotated POJOs More deployment options Example: Just-in-Time deploy feature • Caching flexibility with JPA “Eager” pre-loads related entities “Lazy” waits for a specific access before loading Pick the strategy that best fits the application’s use of data Caching unnecessary data increases memory footprint and GC overhead Repeated retrievals of same data increases processing cost (CPU, etc.)
  • 14. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 14 Important Areas for Performance Dynamic Cache Service • Policy Based caching (defined via the cachespec.xml configuration file) Servlet/JSP Commands Web Service Web Service Client – ( JAXRPC ) • API based caching Distributed Map Cacheable Servlet Cacheable Command • Distribution features Distribute cache contents to peer instances within a cluster Push content to HTTP servers and other “edge” components
  • 15. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 15 Important Areas for Performance Dynamic Cache Service • Ability to control the size of cache Previously only the number of cache entries was configurable Size of cache in bytes can now be configured Applies to both in-memory and disk cache Increase resilience and better manageability • Disk offload Allows cache to overflow to disk Optionally save/persist cache contents to disk upon stop Significant performance enhancements for disk offload in V6.1 “High performance” mode yields best performance but requires more memory Enhancements have been backported to older versions http://www.ibm.com/support/docview.wss?uid=swg24013097 • Servlet and Object Cache Instances Multiple cache instances within the same server Allows for more fine-grained control over cache entries • Enhanced cache monitor application http://www.ibm.com/developerworks/websphere/downloads/cache_monitor.html New in V7.0
  • 16. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 16 Important Areas for Performance WebSphere eXtreme Scale Dynamic Cache Provider • WebSphere eXtreme Scale can be used as Dynamic Cache Provider Supported from WAS 6.1.0.25 and WAS 7.0.0.5 No changes to the application required! • Can provide significant benefit to Dynamic Cache Service applications No more redundant data in clusters Better replication capabilities Can scale up to very large caches without relying on disk subsystems • Choice of two options WebSphere Extreme Scale 7.0 or 7.1 WebSphere DataPower XC10 appliance
  • 17. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 17 Important Areas for Performance Java Virtual Machine • Sizing the Java heap appropriately is key to good WebSphere performance Always keep the JVM(s) within the physical memory of the server • Determine good maximum heap size through testing Enable verbose:gc and analyse the native_stderr.log Use “Garbage Collection and Memory Visualizer (GCMV)” Tool available through IBM Support Assistant http://www.ibm.com/software/support/isa/ Other tools can be used as well Good starting point for maximum heap size 512 MB for WebSphere Application Server 1024 MB for WebSphere Process Server 1024 MB for WebSphere Portal Server • Determining minimum heapsize is usually easier Production systems set minimum heap size lower than the maximum Allows the JVM to determine the optimal heap size Can result in a more efficient object table Gives headroom for emergencies Some “Burst” scenarios might require setting the minimum equal to the maximum Avoids resizing the heap completely Generational GC on IBM Java 6.0 default nursery size is 25% of maximum heap size
  • 18. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 18 Important Areas for Performance Java Virtual Machine • WAS 6.0 IBM Java 1.4.2 JVM Flat memory model Generational model for selected 64 bit editions 32 and 64 bit editions • WAS 6.1 Introduction of IBM Java 5.0 JVM Flat or generational garbage collection Default is flat 32 and 64 bit editions Support for shared classes • WAS 7.0 Introduction of IBM Java 6.0 JVM Flat or generational garbage collection Default is flat 32 and 64 bit editions Compressed Reference Technology provides enhanced 64 bit performance Enhanced support for shared classes
  • 19. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 19 Important Areas for Performance Java Virtual Machine – IBM Java 1.4.2 JVM • Pinned and dosed objects are unmovable during compaction Example: Native calls generate pinned objects • Heap fragmentation could become an issue Especially true with large objects OutOfMemory occurs even if there appears to be sufficient free heap available • kCluster is an area of storage for class blocks Default size = 1280 (entries) x 256 (bytes) Can be reset via JVM argument –Xk<size> • pCluster is an area for pinned objects 16KB by default Newly created pCluster are 2KB in size Can be reset using but only on AIX when using the subpool GC policy -Xp<iiii>[K][,<oooo>[K]] iiii = initial pCluster size oooo = size of subsequent overflow pClusters AIX subpool GC policy
  • 20. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 20 Important Areas for Performance Java Virtual Machine – IBM Java 5.0 JVM • IBM implementation of Java 5.0 runtime Built to the Java 5.0 specifications • IBM Java 5.0 JVM manages JNI objects differently No pinned or dosed objects Reduces fragmentation • Two memory models available in IBM Java 5.0 JVM Flat model Similar to IBM Java 1.4.2 model Generational Similar idea as Sun and HP generational collectors Simplified model and tuning • Variety of garbage collection policies Optimizations for pause times or processing time Sub-pool support to reduce thread contention • Improved Just-in-Time (JIT) compiler Longer warm-up for optimized code No longer disabled for remote debugging Faster WAS startup times with remote debugger attached
  • 21. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 21 Important Areas for Performance Java Virtual Machine – IBM Java 6.0 JVM • IBM Java 6.0 introduced Compressed Reference (CR) technology Reduces the width of 64 bit Java heap references to 32 bits Implements an efficient bit shifting algorithm to accomplish this Exploits the fact that all references (pointers) are 8 byte aligned Can be used for heap sizes up to 28 GB WAS V7.0 enables this by default for heap sizes up to 25 GB • WebSphere Application Server 7.0 benefits from CR technology Reduces 64 bit WebSphere memory footprint back to 32 bit equivalent Typical memory footprint of 64 bit WebSphere 6.1/6.0 was 60-70% larger than 32 bit Reduces performance overhead of 64 bit WebSphere Performance is generally around 95% compared to 32 bit WebSphere Minimal performance cost is due to reference compression/decompression • Full details provided in an IBM whitepaper “IBM WebSphere Application Server WAS V7 64-bit performance – Introducing WebSphere Compressed Reference Technology” ftp://ftp.software.ibm.com/software/webserver/appserv/was/WAS_V7_64-bit_performance.pdf New in V7.0
  • 22. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 22 Important Areas for Performance Java Virtual Machine – IBM Java 6.0 JVM • IBM Java 6.0 further extends the support for shared classes IBM Java 5.0 introduced the sharing of common classes Reduction in startup time and memory footprint Useful when running many JVM processes on the same machine • Shared cache information can now be persisted to the filesystem Cache can survive a system restart Reduces startup time of WebSphere processes, even after a reboot! –Xshareclasses:persistent enables persistent cache (default except on z/OS) –Xshareclasses:nonpersistent disables persistent cache • Shared cache can now also stored Ahead of Time (AOT) compiled code The Just in Time (JIT) compiler generates AOT compiled code (native code) Speeds up execution of application code Compilation can now be avoided if the AOT compiled code is in the shared cache IBM Java 5.0 only allowed for sharing the static class data Example: –Xscmx50M –Xscminaot5M –Xscmaxaot10M Creates a 50MB cache, guaranteeing at least 5MB of space but no more than 10MB for AOT compiled code
  • 23. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 23 Important Areas for Performance Hardware and Operating System – Partitioning on System p • Partitioning capabilities are very powerful Static LPARs Dynamic LPARs Dynamic Micro-Partitioning WPARs • Beware of performance impact of Dynamic Micro-Partitioning Fractional amount of physical CPU resources can be assigned to LPARs Processor cache of physical CPU will be flushed frequently Frequent flushing of processor cache can significantly reduce performance for applications that are sensitive to the efficiency of these caches, for example: WebSphere Application Server DB2 Universal Database LPAR Physical Hardware Operating System LPAR Operating System LPAR Operating System Virtual CPU Physical CPU
  • 24. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 24 Important Areas for Performance Hardware and Operating System – Large Page support on System p • Large Page support on Power 4 or higher 16 MB memory pages instead of default 4 KB 10-15% better performance for processes that use a lot of memory For example WebSphere Application Server Large page support requires all memory pages in a 256 MB segment to be large pages Requires a change to OS configuration plus reboot • Medium Page support on Power5+ and higher 64 KB memory pages instead of default 4 KB Almost the same performance boost as 16 MB pages No need to change OS configuration • Full details in WebSphere information center http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.webs phere.nd.doc/info/ae/ae/tprf_tuneaix.html
  • 25. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 25 Best Practices from the Field
  • 26. WebSphere Technical Conference and Portal Excellence Conference © 2010 IBM Corporation 26 Best Practices from the Field Outbound SOAP/HTTP Web Services bottleneck • Connections for outbound SOAP/HTTP Web Services calls are pooled Default size of this pool is set to 25 The reasoning is that this pool should never exceed the number of threads in the Web Container thread pool This might be valid when the Web Services client is running in the same JVM as the server Several scenarios where the above is not the case Pool can be sized through JVM custom property com.ibm.websphere.webservices.http.maxConnection Several other custom properties available to deal with timeouts, proxies, etc http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/ rwbs_httptransportprop.html • A bottleneck here can be easily seen from a threaddump Note method OutboundConnectionCache.findGroupAndGetConnection 3XMTHREADINFO "Default : 2030" (TID:0x0000008097D66900, sys_thread_t:0x0000008080C63520, state:CW, native ID:0x0000000000000612) prio=5 4XESTACKTRACE at java/lang/Object.wait(Native Method) 4XESTACKTRACE at java/lang/Object.wait(Object.java:231(Compiled Code)) 4XESTACKTRACE at com/ibm/ws/webservices/engine/transport/channel/OutboundConnectionCache.findGroupAndGetConnection(OutboundConnectionCache.java:317(Compiled Code)) 4XESTACKTRACE at com/ibm/ws/webservices/engine/transport/http/HTTPSender.invoke(HTTPSender.java:521(Compiled Code)) ... 4XESTACKTRACE at be/fgov/kszbcss/soa/srm/sbrequesthandler/impl/OnlineChannelFactoryITFSelectorImpl.send(Bytecode PC:22(Compiled Code)) ... 
4XESTACKTRACE at be/fgov/kszbcss/soa/srm/sbrequesthandler/impl/JavaSRMRequestHandlerComponentImpl.handleRequest4Supplier(Bytecode PC:486(Compiled Code))
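The pool-exhaustion wait seen in the stack trace can be illustrated with a minimal sketch. This is a hypothetical class, not the WebSphere OutboundConnectionCache API: a semaphore caps concurrent outbound connections, and callers block, much like the Object.wait() frames above, once the cap is reached.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Minimal sketch of a bounded outbound connection pool.
// Names are illustrative only.
public class BoundedConnectionPool {
    private final Semaphore permits;

    public BoundedConnectionPool(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    // Callers wait here when all connections are in use -- the same
    // condition that shows up as blocked Web Container threads.
    public boolean acquire(long timeoutMs) throws InterruptedException {
        return permits.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public void release() {
        permits.release();
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedConnectionPool pool = new BoundedConnectionPool(2);
        System.out.println(pool.acquire(10)); // true: first connection granted
        System.out.println(pool.acquire(10)); // true: second connection granted
        System.out.println(pool.acquire(10)); // false: pool exhausted, caller times out
        pool.release();
        System.out.println(pool.acquire(10)); // true: a freed connection is reused
    }
}
```

With the default cap of 25, requests beyond the cap simply queue; raising com.ibm.websphere.webservices.http.maxConnection is the WebSphere-side equivalent of constructing the pool with a larger permit count.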
  • 27. Best Practices from the Field – Configure an Aggressive Hung Thread Detection Policy • WebSphere has a built-in detection mechanism for long running threads Logs a message to SystemOut.log if a thread has been active for over 600 seconds Detection logic runs once every 180 seconds • For OLTP workloads threads are not expected to run for more than a few seconds The default threshold of 600 seconds is excessive Long running threads keep hold of other resources, so this condition should be avoided It is a common misunderstanding that transaction timeouts would stop a running thread The threshold can be overridden through the JVM custom property com.ibm.websphere.threadmonitor.threshold • WebSphere can also generate a thread dump when long running threads are detected Can be very helpful for root-cause analysis, especially in production environments Can be enabled through the JVM custom property com.ibm.websphere.threadmonitor.dump.java • Full details available in the information center http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.nd.doc/info/ae/ae/ttrb_confighangdet.html [18/02/10 13:05:14:377 GMT] 00000014 ThreadMonitor W WSVR0605W: Thread "WebContainer : 52" (00000113) has been active for 614687 milliseconds and may be hung. There is/are 1 thread(s) in total in the server that may be hung.
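The detection idea itself is simple and can be sketched in a few lines. This is an illustration of the mechanism, not WebSphere's ThreadMonitor implementation (which is configured purely through the custom properties above): record when each unit of work starts, and periodically flag work that has exceeded a threshold.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of hung-thread detection. Illustrative only.
public class HungThreadDetector {
    private final Map<Thread, Long> startTimes = new ConcurrentHashMap<>();
    private final long thresholdMs;

    public HungThreadDetector(long thresholdMs) {
        this.thresholdMs = thresholdMs;
    }

    public void workStarted(Thread t)  { startTimes.put(t, System.currentTimeMillis()); }
    public void workFinished(Thread t) { startTimes.remove(t); }

    // A monitor would call this on an interval (WebSphere: every 180s by default)
    // and log a WSVR0605W-style warning for each flagged thread.
    public int countPossiblyHung() {
        long now = System.currentTimeMillis();
        int hung = 0;
        for (Map.Entry<Thread, Long> e : startTimes.entrySet()) {
            if (now - e.getValue() > thresholdMs) {
                hung++;
            }
        }
        return hung;
    }

    public static void main(String[] args) throws InterruptedException {
        HungThreadDetector d = new HungThreadDetector(50); // 50 ms threshold for the demo
        d.workStarted(Thread.currentThread());
        Thread.sleep(100);                         // simulate a long-running request
        System.out.println(d.countPossiblyHung()); // the request is flagged
        d.workFinished(Thread.currentThread());
        System.out.println(d.countPossiblyHung()); // nothing outstanding
    }
}
```

Note that, exactly as the slide warns, flagging a thread does not stop it; the detector only reports, which is why the warning (and the optional thread dump) is a diagnostic aid rather than a remedy.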
  • 28. Best Practices from the Field – High CPU Utilisation with WebSphere V7.0 on AIX • Customers observe high CPU utilisation when running WAS V7.0 on AIX There is a known defect in IBM Java 6.0 SR6 and SR7 on AIX The Java attach agent gets stuck in a loop trying to open a semaphore http://www-01.ibm.com/support/docview.wss?uid=isg1IZ73533 • The above problem can be confirmed from a thread dump as shown below • The problem has been resolved in IBM Java 6.0 SR8 The workaround is to disable the Java attach agent Use the JVM custom property com.ibm.tools.attach.enable=no 3XMTHREADINFO "Attach handler" J9VMThread:0x00000000300A4400, j9thread_t:0x0000000111DA5DC0, java/lang/Thread:0x0000000040035980, state:CW, prio=6 3XMTHREADINFO1 (native thread ID:0x26600CD, native priority:0x6, native policy:UNKNOWN) 3XMTHREADINFO3 Java callstack: 4XESTACKTRACE at com/ibm/tools/attach/javaSE/IPC.openSemaphoreImpl(Native Method) 4XESTACKTRACE at com/ibm/tools/attach/javaSE/IPC.reopenSemaphore(IPC.java:182(Compiled Code)) 4XESTACKTRACE at com/ibm/tools/attach/javaSE/AttachHandler.waitForNotification(AttachHandler.java:181(Compiled Code)) 4XESTACKTRACE at com/ibm/tools/attach/javaSE/AttachHandler.run(AttachHandler.java:163(Compiled Code)) 3XMTHREADINFO3 Native callstack: 4XENATIVESTACK _event_wait+0x2b8 (0x090000000070985C [libpthreads.a+0x1685c]) 4XENATIVESTACK _cond_wait_local+0x4e4 (0x0900000000717568 [libpthreads.a+0x24568]) 4XENATIVESTACK _cond_wait+0xbc (0x0900000000717B40 [libpthreads.a+0x24b40]) 4XENATIVESTACK pthread_cond_wait+0x1a8 (0x09000000007187AC [libpthreads.a+0x257ac])
  • 29. Best Practices from the Field – Energy saving mode can limit performance • Modern platforms such as Power7 provide several energy saving options “Dynamic Power Saver” – reduces CPU frequency depending on workload “Static Power Saver” – reduces CPU frequency permanently • Make sure that “Static Power Saver” is not enabled when performance is vital Can be disabled through the Hardware Management Console (HMC) Use “lparstat” to confirm this http://www.ibm.com/developerworks/wikis/display/WikiPtype/CPU+frequency+monitoring+using+lparstat lp_newpap_904_st: /home/wasadmin>lparstat –E 2 2 System configuration: type=Shared mode=Uncapped smt=4 lcpu=16 mem=34816MB psize=12 ent=4.00 --------Actual-------- freq ------Normalised------ user sys wait idle --------- user sys wait idle 12.56 2.510 0.249 0.681 2.1GHz[ 70%] 8.792 1.757 0.174 5.277 12.13 2.602 0.237 1.031 2.1GHz[ 70%] 8.491 1.821 0.166 5.522 [Chart: Physical CPU vs Entitlement – lp_newpap_904_st, physical CPU consumption plotted against the 4.0 CPU entitlement while the CPU frequency is held at 70%]
  • 30. Best Practices from the Field – Avoid Power7 LPARs that span multiple processor sockets • A single Power7 chip consists of multiple cores 4, 6 or 8 cores depending on the model • LPARs that span multiple chips/sockets incur a performance overhead Numbers ??? • Recommendations Restrict the size of the LPAR to the number of cores per chip Ensure that all cores from that LPAR are assigned to the same chip $ lssrad -va REF1 SRAD MEM CPU 0 0 6180.69 24-31 1 1 498.00 56-63 $ lssrad -va REF1 SRAD MEM CPU 0 0 7695.19 0-31 1 1 236.25 32-63 Bad; Good;
  • 31. Best Practices from the Field – Measuring and Monitoring • Obtaining meaningful system metrics from virtualized or partitioned environments • Measuring inside the VM image or partition does not represent the whole environment • Use the appropriate commands to view system resources vs. image/partition resources A micropartitioned environment might show 95% CPU utilization from some commands Looking at the broader environment The partition is using 95% of the CPU currently allocated The hypervisor has additional CPU in reserve pending allocation Thus the server has not exhausted its CPU yet [Diagram: multiple LPARs, each with its own operating system, running on shared physical hardware with virtual CPUs mapped to physical CPUs]
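The arithmetic behind this point is worth making explicit. The numbers below are hypothetical (chosen to match the lparstat sample earlier: ent=4.00, psize=12): 95% utilisation inside the partition is 95% of the entitlement, not of the physical pool.

```java
// Why 95% "CPU busy" inside a micropartition does not mean the
// machine is out of CPU. All figures are illustrative.
public class EntitlementMath {
    public static void main(String[] args) {
        double entitledCpus = 4.0;  // CPUs currently allocated to the LPAR (ent)
        double utilisation  = 0.95; // what tools inside the partition report
        double poolCpus     = 12.0; // physical CPUs in the shared pool (psize)

        // Physical CPUs actually consumed by the partition
        double physicalUsed = Math.round(entitledCpus * utilisation * 10) / 10.0;
        // Capacity the hypervisor could still grant (uncapped partition)
        double headroom = Math.round((poolCpus - physicalUsed) * 10) / 10.0;

        System.out.println(physicalUsed + " physical CPUs in use, "
                + headroom + " still available in the shared pool");
    }
}
```

So a partition reporting 95% busy may be consuming only 3.8 of 12 physical CPUs; whether it can actually grow into the headroom depends on its capped/uncapped mode and virtual CPU count, which is why the slide insists on using the hypervisor-aware commands.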
  • 32. Best Practices from the Field – Do you really need 64-bit WebSphere? • Myth – “If the OS is 64-bit, we must use 64-bit WebSphere as well” • WebSphere 32-bit edition Supported on most 64-bit OS environments Check the support matrix for details Benefits from a smaller physical memory footprint This is true on both 32-bit and 64-bit OS Can more easily use the full 32-bit address space No limitations due to reserved address space for kernel or libraries • WebSphere 64-bit edition Allows the JVM to grow well beyond 32-bit process size boundaries Memory constrained applications can see substantial benefits Allows for more extensive use of caching The use of 64-bit registers benefits certain application code Security algorithms are a good example Performance penalty and Java memory overhead are relatively small on WAS V7.0 Compressed References are enabled by default for heap sizes up to 25 GB Native memory usage is still higher than 32-bit
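A quick way to see what a given server JVM is actually running is a sketch like the one below. It relies on the "sun.arch.data.model" system property, which both HotSpot and IBM JVMs set by convention; treat it as an assumption rather than a guarantee, and fall back to "os.arch" if it is absent.

```java
// Report the JVM's bitness and configured max heap -- a useful first
// check when deciding whether 64-bit is actually needed.
public class JvmBitness {
    public static void main(String[] args) {
        // "sun.arch.data.model" is "32" or "64" on most JVMs (an assumption,
        // not part of the Java specification).
        String bits = System.getProperty("sun.arch.data.model", "unknown");
        // maxMemory() reflects the effective -Xmx setting.
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println(bits + "-bit JVM, max heap ~" + maxHeapMb + " MB");
    }
}
```

If the heap comfortably fits in a 32-bit address space, the slide's argument applies: the 32-bit edition's smaller footprint may be the better trade.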
  • 33. Summary • WebSphere performance spans many areas and features • High-end function and features provide performance when used appropriately • Hardware and OS selection play important roles in performance
  • 34. Questions? • Thank you for attending! Please complete your session evaluation, the session number is W23
  • 35. Further Reading Performance Analysis for Java Web Sites – Joines, Willenborg, Hygh IBM WebSphere Deployment and Advanced Configuration – Barcia, et al IBM WebSphere System Administration – Williamson, et al WebSphere Application Server: Step by Step – Turaga, et al Persistence in the Enterprise – Barcia, Hambrick, et al
  • 36. References • “Best Practices for Large WebSphere Topologies” http://www.ibm.com/developerworks/websphere/library/techarticles/0710_largetopologies/0710_largetopologies.html • “IBM WebSphere Application Server WAS V7 64-bit performance – Introducing WebSphere Compressed Reference Technology” ftp://ftp.software.ibm.com/software/webserver/appserv/was/WAS_V7_64-bit_performance.pdf • IBM Support Assistant http://www.ibm.com/software/support/isa/ • Many thanks to Stacy Joines, John Stecher and Chris Blythe for their contributions to this presentation
  • 37. Other Sessions • W08 – IBM WebSphere eXtreme Scale: Introduction • W17 – WAS 7 and Java 6 Performance Tuning • W21 – Service Integration Bus in WAS 7.0