I'm trying to get into the business of inspecting my JVM using the
mbeans that Tomcat (and the JVM) expose, but I'm having great
difficulty. I'm not even sure if I'm doing the right things.

I have a running Tomcat instance (happens to be 7.0.25 running on Sun
JRE 1.6.something on Mac OS X Lion), and I can happily connect to it
using jconsole (using a process id) with no special startup parameters
for Tomcat itself.

If I understand this correctly, it's because when the JVM starts up,
it allows localhost JMX inspection with no authentication using the
"Attach" API [1].
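(For anyone unfamiliar with it, the Attach API is what jconsole uses to find local JVMs by pid. A minimal sketch, assuming the jdk.attach module / tools.jar is available -- the class name here is mine, not part of any tool:)

```java
import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;

public class ListVms {
    public static void main(String[] args) {
        // VirtualMachine.list() enumerates locally attachable JVMs,
        // the same set jconsole shows in its "Local Process" picker.
        for (VirtualMachineDescriptor vmd : VirtualMachine.list()) {
            // id() is the process id; displayName() is the main class / command line
            System.out.println(vmd.id() + "  " + vmd.displayName());
        }
    }
}
```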

I have enabled neither Tomcat's
mbeans.GlobalResourcesLifecycleListener nor the
JmxRemoteLifecycleListener [2].

I can see a wealth of information about the JVM and Tomcat this way.

Now I'm trying to get similar information using a very simple
command-line tool called check_jmx -- a plug-in for Nagios. It
appears that this tool does not support the "attach" API and so it
looks like I'll have to enable "remote JMX", so I've followed the
instructions on Tomcat's monitoring page to enable remote JMX [3]:

$ export CATALINA_OPTS='\
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=1234 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false'

When running Tomcat, I can see that those parameters are actually part
of the command line. I can also see that the port is bound:

$ netstat -an | grep 1234
tcp46      0      0  *.1234                 *.*                    LISTEN

I can use this tool with a command line such as the following:

$ ./check_jmx -U service:jmx:rmi:///jndi/rmi://localhost:1234/jmxrmi \
-O java.lang:type=Memory -A HeapMemoryUsage -K used \
-I HeapMemoryUsage -J used -vvvv -w 4248302272 -c 5498760192


Without going into too much detail, those options allow check_jmx to
probe a certain value, then use the -w and -c options to determine
whether the value obtained from the JMX server is okay, in the warning
zone, or in the critical zone, and then builds a text response based
upon that.
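For reference, here is a sketch of roughly what check_jmx is doing with those options, except querying the in-process platform MBeanServer instead of a remote connector so it is self-contained (the class and method names are mine, not check_jmx's; the thresholds are the -w/-c values from the command line above):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class HeapCheck {
    // Mirrors the -w/-c logic: OK below warn, WARNING from warn up to
    // crit, CRITICAL at or above crit.
    static String status(long used, long warn, long crit) {
        if (used >= crit) return "CRITICAL";
        if (used >= warn) return "WARNING";
        return "OK";
    }

    public static void main(String[] args) throws Exception {
        // Remotely this would be a connection obtained from
        // JMXConnectorFactory.connect(new JMXServiceURL("service:jmx:rmi:..."));
        // locally we can just ask the platform MBeanServer.
        MBeanServerConnection mbsc = ManagementFactory.getPlatformMBeanServer();
        CompositeData usage = (CompositeData) mbsc.getAttribute(
                new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
        long used = (Long) usage.get("used");
        System.out.println("JMX " + status(used, 4248302272L, 5498760192L)
                + " - HeapMemoryUsage.used = " + used);
    }
}
```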

So, now I want to move to my server which happens to be an Amazon EC2
instance with an elastic IP assigned to it so I can get to it from the
outside. Most ports are blocked. I try the same type of thing: set up
the com.sun.management.etc parameters and launch Tomcat. netstat reports:

$ netstat -plan | grep 1234
tcp6       0      0 :::1234                :::*                   LISTEN      2819/java

So I should be good to go. I launch the command the same as on my
local machine, but this time it hangs and then I get an exception:

$ ./check_jmx -U service:jmx:rmi:///jndi/rmi://localhost:1234/jmxrmi \
  -O java.lang:type=Memory -A HeapMemoryUsage -K used -I HeapMemoryUsage \
  -J used -vvvv -w 4248302272 -c 5498760192
JMX CRITICAL Connection refused to host: [public IP]; nested exception is:
java.net.ConnectException: Connection timed out connecting to
java.lang:type=Memory by URL
Connection refused to host: [public IP]; nested exception is:
java.net.ConnectException: Connection timed out
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198)
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110)
at javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source)
at org.nagios.JMXQuery.connect(JMXQuery.java:53)
at org.nagios.JMXQuery.main(JMXQuery.java:75)
Caused by: java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:529)
at java.net.Socket.connect(Socket.java:478)
at java.net.Socket.<init>(Socket.java:189)
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)
... 10 more

So, [public IP] is actually my public, IPv4 address. I figure the
problem might be IPv4-versus-IPv6, but before I go "preferring" the
IPv4 stack, I check netstat during the execution of check_jmx:

$ netstat -plan | grep 1234
tcp6       0      0 :::1234                :::*                   LISTEN      2819/java
(two further lines, truncated: established connections on port 1234)

(pid 2819 is my Tomcat process, and 2950 is check_jmx). So, what else
is check_jmx doing?

$ netstat -plan | grep 2950
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0  [established connection to port 1234; line truncated]
tcp6       0      1  [outgoing connection to the public IP; line truncated]   SYN_SENT    2950/java

So, check_jmx is actually successfully establishing a socket to the
JMX port, but it then does something odd with the (random) RMI port.
First, it tries to use the public IP address instead of just localhost
(which seems stupid, but is probably what the JMX service told it to
do). Second, it calls out on the internet-facing IP address, which is
probably a consequence of using a non-localhost destination address.

So, a couple of questions:

1. Is there a way to tell the JVM which interface to use for the
second, random RMI port that gets opened? I think if everything
binds to :: (or the IPv4 equivalent) then all will probably be well.

2. Should I just give up and use JmxRemoteLifecycleListener? Will that
actually buy me anything? I will always be running check_jmx
locally. I'm not a huge fan of having to deploy with an optional
library tossed into Tomcat's lib directory (even though it's actually
from the fine Apache folks).

3. Should I just give up and use the manager app's jmxproxy? I don't
currently deploy the manager app, and I'd like to avoid doing that
if possible. But, it may be a slightly cleaner solution.

4. Should I hack the code for check_jmx to use the Attach API and
try to avoid all of this stupid port business? Getting the PID
of the Tomcat process shouldn't be hard as long as I use
CATALINA_PID and get the value from there.
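Regarding the interface question: one commonly suggested workaround, which I have not verified on this setup, is to pin the hostname that the RMI stub advertises back to clients. Later JDKs (7u4+, so not the 1.6 JRE here) also add a property to pin the second RMI port itself. A sketch in the same CATALINA_OPTS style as above:

```shell
# Sketch of a possible workaround (unverified here): java.rmi.server.hostname
# controls the address embedded in the RMI stub, so a local-only client stops
# being redirected to the public IP. The jmxremote.rmi.port property exists
# only on JDK 7u4 and later; it pins the second (normally random) RMI port.
export CATALINA_OPTS='\
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=1234 \
  -Dcom.sun.management.jmxremote.rmi.port=1234 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Djava.rmi.server.hostname=localhost'
```

Note that advertising localhost would break genuinely remote clients, but since check_jmx will always run locally here, that trade-off may be acceptable.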

Those playing along at home might remember that I recently posted a
question about Kitty, another JMX CLI program. The documentation for
Kitty has the same type of ugly RMI URLs in it, so I suspect I'll have
the same problem with another tool: I think this is squarely a
server-process configuration issue that I have to deal with.

Any suggestions would be most appreciated.

-chris




To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
