Defining connections in one place is one thing.
Using the same connection from different places is another.
The first point, defining connections in one place, is fairly
trivial and hardly needs discussion: basically it means specifying
the DSN string, and that can be done in a dozen ways without a
problem.
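As a minimal sketch of "defining the connection in one place": a small package that owns the DSN and hands out a handle. The package name, DSN, and sub name here are all hypothetical, not anything from the thread; connect_cached is a standard DBI method.

```perl
package My::DB;    # hypothetical package name
use strict;
use warnings;
use DBI;

# Example connection details; substitute your own
my $dsn  = 'dbi:SQLite:dbname=app.db';
my $user = '';
my $pass = '';

sub dbh {
    # connect_cached returns the same underlying handle when called
    # again with identical arguments within the same process
    return DBI->connect_cached($dsn, $user, $pass, { RaiseError => 1 });
}

1;
```

Every script then says `My::DB->dbh` instead of repeating the DSN.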
However, taking a connection that was created in one central place
and using it in a bunch of other places is a more complicated issue.
The situation depends on your setup.
Say that the creator of the connection is code A, and that code B and
C use that connection.
Code A, B and C reside inside the same process, i.e. they are run by
the same Perl interpreter. For example, you could have a mod_perl
process initializing a connection (A) and then using that connection
each time a page is requested (B, C, etc.).
The only problem in this case is that when the connection goes down,
any subsequent use of the connection fails.
You can use something like DBIx::HA to reconnect seamlessly, by way
of DBI's swap_inner_handle() method. Basically, when you see that the
connection has gone down, you create a new one and swap its inner
handle with the old one, effectively making the original handle
active again, unbeknownst to the codebase. So say A connects, then B
uses it successfully, then it goes down and C sees a failure. C does
the swap_inner_handle magic and reuses the connection, and after that
B can use it again without noticing anything happened.
In simpler words, assume you have a global $dbh and at some point it
becomes bad. You create a local $dbh2, connect, and when you're happy
with it, you swap_inner_handle between $dbh and $dbh2. From that
point on, $dbh is good and $dbh2 is bad. Then you can keep going with
your code that uses the $dbh global variable. It's perfectly elegant.
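A sketch of that $dbh/$dbh2 swap. swap_inner_handle() is a real DBI method (DBI 1.44 or later); the sub name and the way the check is triggered are illustrative, not how DBIx::HA itself is structured.

```perl
use strict;
use warnings;
use DBI;

our $dbh;    # the global handle the rest of the code uses

sub revive_dbh {
    my ($dsn, $user, $pass) = @_;
    return if $dbh && $dbh->ping;    # still good, nothing to do

    # $dbh2: a fresh, working connection
    my $dbh2 = DBI->connect($dsn, $user, $pass, { RaiseError => 1 });

    if ($dbh) {
        # Move the live inner handle into the old outer handle.
        # Everyone still holding $dbh is working again; $dbh2 now
        # wraps the dead inner handle and can simply be discarded.
        $dbh->swap_inner_handle($dbh2);
    }
    else {
        $dbh = $dbh2;    # first connection ever
    }
}
```

Code that failed can call revive_dbh() and retry; code elsewhere keeps using the same $dbh variable throughout.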
Code A, B and C do NOT reside inside the same process. That's much
more difficult. As Ross says below, you can cache the database handle
in some way. Or you can use DBI::Gofer. One way to think about
DBI::Gofer is that you have a central process (think of it as a
daemon) that runs all the queries, and you connect to it from your
script, give it the SQL you want, and it gets you your data back. In
effect it's a centralized DBI "server". So Code A resides inside that
server, and code B and C use DBI::Gofer to communicate with the DBI
server. You can communicate using a number of protocols (stream,
http, ssh...) so you can have that DBI server in the same machine or
on another machine (or cluster).
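To give a feel for it, here is a hedged sketch of what a Gofer connection looks like. The null transport runs Gofer in-process, which is handy for trying it out, and DBD::ExampleP ships with DBI itself; for a real remote setup you would use a different transport (e.g. http) with a Gofer server on the other end.

```perl
use strict;
use warnings;
use DBI;

# Illustrative DSN: everything before "dsn=" configures Gofer, and
# the inner DSN after "dsn=" is the database you actually want.
my $dbh = DBI->connect(
    'dbi:Gofer:transport=null;dsn=dbi:ExampleP:',
    '', '', { RaiseError => 1 },
);

# The querying code is plain DBI; only the DSN knows about Gofer.
# (ExampleP treats a directory as a table of its entries.)
my $rows = $dbh->selectall_arrayref('select name from ?', undef, '.');
print scalar(@$rows), " directory entries\n";
```

Because only the DSN changes, code B and C stay ordinary DBI code.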
On Aug 1, 2007, at 11:11 PM, John Costello wrote:
On Tue, 31 Jul 2007, Russ wrote:
Over time, the need to connect to the same database from different
scripts became apparent. To do this I began by creating a package
that contained all the $dbh handles (not advisable, as if one fails
then your whole script is no good). So I then split them into
individual packages, one for each connection handle. Again, not good
if you need to connect to multiple databases in your script but
aren't dependent on all of them working.
So is there a good way to manage just the database connections, so
that multiple scripts can use the same connection but it is defined
in one place? Should I explore DBI::Ingres, or is there another,
easier way, other than defining the connection in each script I
create?
If I understand right, you want to create your database handle and
then have multiple scripts use that same $dbh, right? If so, read
on; if not, the talks linked below may still be useful.
Tim talked about caching database handles (and other things) and their
gotchas in his Advanced DBI talk, which he posted to the list earlier.
Two things come to mind off the top of my head, both based on those
talks, which I recommend that you peruse:
One approach would be to have a parent script that creates the
handle and then manages all of the other scripts. This assumes some
things about what you are doing, and may not be practical for you. I
suppose you could launch a separate script that maintains the $dbh
and lets the other scripts communicate with it.
Another approach: if the scripts are called from Apache, then it is
possible to share the $dbh between processes.
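For the Apache case, the usual pre-made piece is Apache::DBI, which hooks DBI->connect so each mod_perl child caches and reuses its own handle (per-child caching rather than true cross-process sharing). A sketch of what would go in a startup.pl, with illustrative connection details:

```perl
# startup.pl (mod_perl); DSN, user, and password are illustrative
use Apache::DBI;    # must be loaded before DBI-using modules
use DBI;

# Optionally open the connection when each child starts, so the
# handle is warm before the first request arrives:
Apache::DBI->connect_on_init(
    'dbi:mysql:dbname=app', 'user', 'password',
    { RaiseError => 1, AutoCommit => 1 },
);

1;
```

The scripts themselves keep calling DBI->connect as usual; Apache::DBI intercepts the call and hands back the cached handle.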
I'm still looking at DBI::Gofer, and can't say with confidence that it
would work for this situation.