steve rock wrote:
What I have done is to create a RequestContextFilter that implements
init(), doFilter(), and destroy() on the javax.servlet.Filter interface.
The destroy() method gets called after every request to my app. Here I
close the db connections. So I keep the connection open for the whole
request. This makes it so I don't have to close the connection in any
other part of my code. We have this running in production and it works
really well.
This is configured as a <filter> in my web.xml and the
<filter-mapping> is also defined.
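A minimal sketch of such a per-request filter, assuming a hypothetical HibernateUtil helper that holds the SessionFactory and a thread-bound Session. One caveat: per the Servlet spec, Filter.destroy() is called once when the container takes the filter out of service, not after every request, so the per-request cleanup is safest in a finally block inside doFilter():

```java
// Per-request session filter sketch (Servlet 2.x style, javax.servlet).
// HibernateUtil is a hypothetical helper holding the SessionFactory and a
// ThreadLocal<Session>; the names are assumptions, not a standard API.
import java.io.IOException;
import javax.servlet.*;

public class RequestContextFilter implements Filter {

    public void init(FilterConfig config) throws ServletException {
        // one-time setup, e.g. building the SessionFactory
    }

    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain) throws IOException, ServletException {
        try {
            chain.doFilter(req, res);      // run the rest of the request
        } finally {
            HibernateUtil.closeSession();  // always release the db connection
        }
    }

    public void destroy() {
        // called once at shutdown, e.g. close the SessionFactory
    }
}
```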
We tried keeping the connection open for a whole JSP session but this
didn't work well. We had problems deleting PBs because we kept getting
errors like "Cannot delete, an object with this id already exists in
the session". session.clear() solves that problem; the best approach,
however, is not to handle session management yourself, but to have it
covered by a proven session handler class.
Spring has something for this in its framework, and Hibernate3 finally
has it at the Hibernate level as well.
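For the Hibernate3 side, this is the contextual session: a sketch, assuming hibernate.cfg.xml sets hibernate.current_session_context_class=thread so that getCurrentSession() binds a Session to the current thread:

```java
// Hibernate3 contextual session sketch. With
// hibernate.current_session_context_class=thread configured,
// getCurrentSession() returns a thread-bound Session that is closed
// automatically when the transaction commits or rolls back.
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class CurrentSessionExample {
    private static final SessionFactory factory =
            new Configuration().configure().buildSessionFactory();

    public static void doWork() {
        Session session = factory.getCurrentSession(); // thread-bound
        session.beginTransaction();
        // ... load/save persistent beans here ...
        session.getTransaction().commit(); // session closed automatically
    }
}
```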
The only "problem" with closing db connections for every request is that
if you have a persistent bean that you use across several pages and
several requests, then for any fields in your PB that are lazy loaded,
i.e. object references to other persistent beans, you will get a
LazyInitializationException. This is because after the first request
your PB becomes a detached persistence object. You need to know the
Hibernate lifecycle of persistent beans to understand this. Read
"Hibernate in Action" or look at the web site documentation.
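A compact illustration of how that exception arises, assuming a hypothetical Order entity whose items collection is mapped lazy, and the same hypothetical HibernateUtil helper:

```java
// Sketch of a LazyInitializationException with a detached object.
// Order and HibernateUtil are hypothetical names for illustration.
import org.hibernate.Session;

public class DetachedExample {
    public static void main(String[] args) {
        Session session = HibernateUtil.getSessionFactory().openSession();
        Order order = (Order) session.get(Order.class, new Long(42));
        session.close();          // request ends, 'order' is now detached

        // The lazy collection proxy has no open session to load from:
        order.getItems().size();  // throws LazyInitializationException
    }
}
```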
The other problem with constant closing is that you might run into
performance problems, because closing a session basically means that you
constantly close the connections instead of keeping a connection pool
and sharing/recycling them.
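The usual answer to that is to put a real pool behind Hibernate so that closing a session only returns the connection to the pool. A sketch using Hibernate's built-in c3p0 integration in hibernate.properties (the property names are Hibernate's c3p0 settings; the values are only illustrative):

```properties
# Pooled connections via c3p0; closing a Session then recycles the
# connection instead of physically closing it.
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
hibernate.c3p0.timeout=300
hibernate.c3p0.max_statements=50
```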
I can really recommend checking out Spring for that one, or moving to
Hibernate3 (or backporting the class for Hibernate2).
As for the servlet filter approach, this one has a nasty side effect many
people stumble upon with a wrong session handling algorithm.
The approach basically is the correct way to do it, but you have to be
aware of the following:
a) You allocate the resources at the servlet init stage
b) You deallocate the resources at the servlet destroy stage
But at a) usually a session is opened.
You do your db stuff, and your objects stay in the session.
At a later stage within the same request you recycle the session without
noticing and pump in an object from outside;
you then have a high chance of getting a Hibernate error which tells you
that the object is already bound to the session.
You also have a high chance, if you do a session.clear in between, that
you get unwanted side effects at a later stage.
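The "already bound" failure looks roughly like this in code: load an object, then pump in a second detached instance with the same identifier. Hibernate rejects the duplicate (hypothetical Order entity; the exception name is from Hibernate3, where it is NonUniqueObjectException):

```java
// Sketch of the "object already bound to the session" failure.
// Order is a hypothetical entity used for illustration.
import org.hibernate.NonUniqueObjectException;
import org.hibernate.Session;

public class DuplicateBindingExample {
    public static void demo(Session session, Order detachedCopy) {
        // First instance with this id enters the session here:
        Order loaded = (Order) session.get(Order.class, detachedCopy.getId());
        try {
            session.update(detachedCopy);  // same id, different instance
        } catch (NonUniqueObjectException e) {
            // "a different object with the same identifier value was
            // already associated with the session"
        }
    }
}
```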
There are two ways to work around this if you use request demarcation
(which is the best way to do it).
Either you know your algorithmic flow and you recycle the session, being
careful about clears and your objects,
or you open new sessions alongside the main session (which you also can
do) if needed.
There are 2 ways to solve this.
1. Specify eager fetching in your HQL. Return all the fields you need
in the first request. Once the object becomes detached from the db, in
the second request it will not execute any sql to fetch
non-loaded fields due to lazy loading.
2. If you still want to use the lazy loading features and not do an
eager fetch in the first request, reattach your detached persistent
bean to the session in the second request. This simply means in your
code call
session.lock(yourPersistantBean, LockMode.NONE) (there are other lock
modes). Then you can access the fields that are lazy loaded without
the exception being thrown.
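Both options above, sketched side by side under the same assumptions as before (a hypothetical Order entity with a lazily mapped items collection):

```java
// Sketch of the two fixes for LazyInitializationException.
// Order/getItems are hypothetical names for illustration.
import java.util.List;
import org.hibernate.LockMode;
import org.hibernate.Session;

public class ReattachExample {

    // 1. Eager fetching: pull the collection in the first request so the
    //    detached object already carries everything it needs later.
    public static Order loadEagerly(Session session, Long id) {
        return (Order) session
            .createQuery("from Order o left join fetch o.items where o.id = :id")
            .setParameter("id", id)
            .uniqueResult();
    }

    // 2. Reattach a detached bean in a later request; LockMode.NONE just
    //    re-associates it with the new session without hitting the db.
    public static List reattachAndRead(Session session, Order detachedOrder) {
        session.lock(detachedOrder, LockMode.NONE);
        return detachedOrder.getItems(); // lazy loading works again
    }
}
```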
session.lock is only suitable for reading. For writing do not lock:
hibernate throws you an error or simply does nothing, despite what the
doc says (at least in Hibernate 2).
A session.<fill in your op here> does the trick in this case;
that means session.update on a non-bound db object should update
the object in the db correctly.
However, if the object was bound by session.lock beforehand, it happened
to me in Hibernate2 that the hibernate subsystem basically went through
the update without an error but did nothing.
Somehow the lock must also lock out an update within the same session on
the same object (no matter which lockmode you use) and does not throw an
error.
For pure write access, do not lock the object; use the internal
versioning system instead to avoid version/soft update conflicts.
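The write path can then look like this: update the detached bean without locking and let a <version> column in the mapping catch concurrent modifications. A sketch (hypothetical Order entity; the exception class is Hibernate's stale-state check):

```java
// Sketch: versioned update of a detached bean, no session.lock involved.
// Order is a hypothetical entity; its mapping is assumed to declare a
// <version> property.
import org.hibernate.Session;
import org.hibernate.StaleObjectStateException;

public class VersionedUpdateExample {
    public static void save(Session session, Order detachedOrder) {
        try {
            session.update(detachedOrder); // reattach-and-update, no lock
            session.flush();
        } catch (StaleObjectStateException e) {
            // The version check failed: someone else changed the row since
            // we loaded it, so handle the conflict here.
        }
    }
}
```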
If you work with entire object trees I can recommend using cascade as
much as possible; at least for 1:1 and 1:n relations an automatic update
on a cascaded tree works very well. For pure m:n relations over binding
tables which are mapped directly to pure m:n object relations, at least
hibernate 2 had its share of problems the last time I used it, and I had
to revert to manual updating, which itself, over so many objects and some
session.flushes and clears in between, caused conflicts with the session.
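The cascaded-tree case reduces to a single save on the root; a sketch assuming a hypothetical Order whose items collection is mapped with cascade="all" (or "save-update") in the mapping file:

```java
// Sketch: with cascade="all" on the 'items' collection mapping, saving
// the root object persists the whole tree. Order is a hypothetical
// entity used for illustration.
import org.hibernate.Session;

public class CascadeExample {
    public static void saveTree(Session session, Order order) {
        session.beginTransaction();
        session.saveOrUpdate(order);   // children are cascaded automatically
        session.getTransaction().commit();
    }
}
```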
Sorry for becoming that long, but there are details here which are
covered in full neither by the Hibernate docs nor by other documentation.
I just wanted to add all this info so that others do not have to fight
with these kinds of things like I had to.