webMethods Memory Management and Troubleshooting for webMethods Servers
Jul 18, 2012 15:30 | Administration | Raj Kumar


A process on a 32-bit system has access to only 4 GB of address space. Of this, 1 GB is allocated to kernel space and 3 GB to user space on Linux-based systems; the split is 2 GB/2 GB on Windows. See the reference section for ways of expanding user space on both operating systems (OSs). Even with 3 GB of user space on Linux (Red Hat Enterprise 4), the Java object heap is limited to roughly 1.6–1.7 GB, while the rest is dedicated to non-heap memory. You may be able to start a server with a Java heap as high as 2.3 GB, but don't expect the application to last long: due to various activities in the non-heap memory space, such as GC (Garbage Collection), threads, and byte-code-to-native compilation, such a server is likely to crash with a native memory error. Therefore, you need to strike a balance between adequate heap memory for the application to run without running out, and enough native memory for the JVM to perform its housekeeping tasks.
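As a rough back-of-the-envelope check (using the illustrative numbers above; the exact ceiling varies by OS, JVM version, and loaded native libraries), the native-memory budget left after sizing the heap can be estimated as:

```shell
# Illustrative 32-bit Linux address-space budget (numbers from the text above)
USER_SPACE_MB=3072        # 3 GB user space on 32-bit Linux
JAVA_HEAP_MB=1600         # practical Java heap ceiling (~1.6 GB)
NATIVE_MB=$((USER_SPACE_MB - JAVA_HEAP_MB))
echo "Left for threads, GC, JIT code, etc.: ${NATIVE_MB} MB"
```

Pushing the heap toward 2.3 GB squeezes this remainder, which is why such servers tend to die with native memory errors rather than heap OutOfMemoryErrors.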

Here are some guidelines on managing memory requirements for webMethods servers (IS and MWS, 7.1.2) on 32-bit operating systems:

  • Know your application's memory requirements. This is important because people generally prefer to give as much memory as possible to the Java heap. After all, more memory will surely benefit performance, right? Not quite. As outlined above, we are juggling 3 GB (or 2 GB) of user space on a 32-bit system, and more heap memory means less native memory. Therefore, monitor the application's actual memory requirements and allocate accordingly.


  • Use server-class hardware to run servers. The HotSpot server VM is automatically selected on server-class machines. The server VM is optimal where overall throughput is critical, while the client VM suits applications that require fast startup times or a small footprint. Differences between the two include the compilation policy, heap defaults, and inlining policy. For instance, the compile threshold is 10,000 invocations in server mode and 1,500 in client mode; the higher server threshold gives the JIT more profiling data for better optimization.
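To check which VM a given JDK selects and its compile threshold (requires a local JDK; flag output varies slightly across JVM versions, so this is a sketch rather than a guaranteed recipe):

```shell
# Show whether the "Server VM" or "Client VM" was selected...
java -version 2>&1 | grep VM
# ...and the JIT compile threshold actually in effect.
java -XX:+PrintFlagsFinal -version 2>/dev/null | grep CompileThreshold
```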


  • Use JVM options. Use them with caution, as inappropriate usage can have adverse effects. Some useful options:
    • Java Heap size: -Xms and -Xmx
    • Perm memory: -XX:MaxPermSize
    • Young generation sizing: -Xmn (absolute size) or -XX:NewRatio (old-to-young ratio)
    • GC algorithm: e.g., -XX:+UseConcMarkSweepGC, -XX:+UseParNewGC
    • Dump heap on memory error: -XX:+HeapDumpOnOutOfMemoryError
    • Dump file: -XX:HeapDumpPath=xxx.hprof
    • Log GC (can view the output using GCViewer): -Xloggc:gc.log
    • Add detail and time stamps to GC output: -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps
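Putting several of these together, a hypothetical startup line for a 32-bit server JVM might look like the following. The sizes and the server.jar target are illustrative assumptions, not recommendations; actual values must come from monitoring your own application:

```shell
# Illustrative only: heap/perm sizes must be derived from monitoring.
java -server -Xms1024m -Xmx1536m \
     -XX:MaxPermSize=256m \
     -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
     -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=is_crash.hprof \
     -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -jar server.jar
```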


  • Start servers using nohup (Linux). Errors such as native memory errors are not logged in application logs because the application never gets a chance to log them: the JVM is terminated prematurely. A server started with the nohup command may be able to log the cause of the crash in nohup.out. However, for detailed errors, you should enable the JVM debugging options discussed above. With MWS, use nohup on run.sh and not startup.sh, as the latter redirects output to the PORTAL_CONSOLE variable defined in mws.sh.
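As a sketch (run.sh is the MWS script named in the text; the explicit redirect to nohup.out is an assumption to make the capture location unambiguous):

```shell
# Detach the server from the terminal and capture fatal JVM output
# (e.g., native memory errors on stderr) that never reaches the app logs.
nohup ./run.sh > nohup.out 2>&1 &
echo "MWS started, PID $!"
```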


  • A restart may help. If the application is running just under its maximum limits, a restart can help avoid a crash. A restart discards compiled code and the profiling data gathered by the HotSpot compiler for optimization, effectively freeing up native memory.


  • Disable the JIT compiler (-Djava.compiler=NONE). This effectively runs the application in interpreted mode and is hence the least efficient way to run it. However, it could be a viable option where performance is not critical, for example a nightly batch job (hopefully, it will finish before daybreak).


  • Beware of third-party libraries. Uncaught errors in third-party libraries can crash the JVM. With the java.nio package, it is possible for a Java program to allocate native memory without using JNI, for example via java.nio.ByteBuffer.allocateDirect(…). If you suspect a JVM crash is caused by a third-party tool beyond your control, it is worth running it in a separate JVM behind a defined interface (e.g., a web service, a socket protocol, etc.).


  • 64-bit runtime. A 64-bit OS can go a long way toward addressing these memory limits, but various posts report a 40%–50% increase in the RAM needed to hold the same amount of data. This is mainly due to:
    • Extended instruction set
    • Increased object size (64-bit pointers)
    • Increased stack size
  • Monitoring tools. Use monitoring tools such as Hyperic to monitor the JVM. If none is available, consider jconsole, shipped with JDK 5 and above, as an alternative. To enable JMX, start the JVM with the following option:
    java … -Dcom.sun.management.jmxremote.port=3333

Modify server.sh (IS) and mws.sh (MWS) accordingly. VisualVM (jvisualvm), shipped with JDK 6, is another option worth considering. It relies on the jstatd daemon to auto-detect JVMs running on a host. To start the jstatd daemon:
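Note that the port-only flag shown above leaves JMX authentication and SSL at their defaults; on a locked-down lab box the following properties are often added as well. Disabling both is insecure and is shown only as a sketch, never for an exposed host:

```shell
# Insecure, lab-only JMX setup: no authentication, no SSL.
java … \
  -Dcom.sun.management.jmxremote.port=3333 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false
```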

./jstatd -J-Djava.security.policy=jstatd.all.policy &
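jstatd requires a security policy file; a commonly used permissive jstatd.all.policy (suitable only for a trusted lab host) looks like this:

```
grant codebase "file:${java.home}/../lib/tools.jar" {
    permission java.security.AllPermission;
};
```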

Other tools worth considering are:

    • jmap, e.g., jmap <pid>, jmap -heap <pid>
    • pmap (Linux)
    • top (Linux)
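For example, on Linux these can be combined to snapshot a server's heap and native footprint. The pgrep pattern is a hypothetical way to find the IS process; adjust it to your installation:

```shell
PID=$(pgrep -f IntegrationServer | head -n1)  # hypothetical process pattern
jmap -heap "$PID"          # Java heap configuration and usage summary
pmap -x "$PID" | tail -n1  # total native (virtual/resident) footprint
top -b -n 1 -p "$PID"      # one-shot CPU/memory snapshot
```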

