Dear All,

We have been struggling a lot with TDS & ODBC performance over high-latency
connections.

We're using MS SQL 2000 and unixODBC on Gentoo Linux to connect to it.

Our problem is that we have a single database that is being accessed by
multiple application servers all over the world. Sometimes network latency
between an appserver and the database could reach one second!!!

The basic problem we're having is that, say, if I call a function on an
appserver that has high latency to the database but low latency to the
client, it's *slower* than if I were calling the remote app server
that has low latency to the database but high latency to the client.

Basically, to illustrate, this:

(cli) -> (app) -> < latency > -> (database)

is slower than this:

(cli) -> < latency > -> (app) -> (database),

which doesn't make any sense to me, because the size of data transferred
between the app and DB should be no larger than the size of data transferred
from an appserver to a client (we use XMLRPC to do function calls on the
appservers).

We don't use bind; we just call stored procedures that return large record
sets (possibly multiple ones).

The only explanation I can see for this behavior is that TDS does extra
round trips to fetch the data... that's my guess. I'm going to sit down with
Wireshark to see exactly what's going on, but meanwhile....
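To put rough numbers on that guess, here's a back-of-envelope model (the row
count, row size, RTT, and bandwidth below are illustrative assumptions, not
measurements from our setup):

```python
# Back-of-envelope: wall time is dominated by round trips, not bytes.
# All numbers below are illustrative assumptions, not measurements.

def transfer_time(round_trips, rtt_s, payload_bytes, bandwidth_bps):
    """Latency cost (round_trips * rtt) plus serialization cost."""
    return round_trips * rtt_s + payload_bytes * 8 / bandwidth_bps

# A 10,000-row result set, ~200 bytes/row, over a 1 s RTT, 10 Mbit/s link:
rows, row_bytes, rtt, bw = 10_000, 200, 1.0, 10_000_000

one_shot  = transfer_time(1, rtt, rows * row_bytes, bw)            # whole set, one round trip
per_block = transfer_time(rows // 100, rtt, rows * row_bytes, bw)  # 100-row fetch blocks
per_row   = transfer_time(rows, rtt, rows * row_bytes, bw)         # one round trip per row

print(f"one shot : {one_shot:.1f} s")
print(f"per block: {per_block:.1f} s")
print(f"per row  : {per_row:.1f} s")
```

The payload is identical in all three cases; only the number of round trips
changes, which is exactly the effect we're seeing between appservers.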

Does anyone have any experience with that, or have any good pointers to
documentation / info on that? I wasn't able to find anything decent on the
Net :(

Thanx in advance!

Fi.


  • Michael Higgins at Mar 30, 2008 at 6:11 pm

    On Sat, 29 Mar 2008 06:03:01 +0300 "Fi Dot" wrote:

    > Dear All,
    >
    > We have been struggling a lot with TDS & ODBC performance over
    > high-latency connections.

    I have as well.

    > We're using MS SQL 2000 and unixODBC on Gentoo Linux to connect to it.

    So am I. Just like that.

    > Our problem is that we have a single database that is being accessed
    > by multiple application servers all over the world. Sometimes network
    > latency between an appserver and the database could reach one
    > second!!! [8<]
    >
    > The only reason for this behavior I could see is that if TDS does
    > roundtrips to fetch the data... that's my guess... I'm going to sit
    > down with Wireshark to see what's going on exactly, but meanwhile....
    Setting the freeTDS and unixODBC log locations and levels might help
    too. Or not.
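    For reference, both layers can log without touching the application; the
    paths and file names below are only examples:

    ```
    # FreeTDS: dump the wire protocol and config resolution (environment
    # variables; a "dump file" option also exists in freetds.conf)
    export TDSDUMP=/tmp/freetds.log
    export TDSDUMPCONFIG=/tmp/freetds_config.log

    # unixODBC: driver-manager tracing, in the [ODBC] section of odbcinst.ini
    [ODBC]
    Trace     = Yes
    TraceFile = /tmp/odbc_trace.log
    ```

    The TDS dump in particular would show each fetch round trip on the wire,
    which is a quicker first check than a full packet capture.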
    > Does anyone have any experience with that, or have any good pointers
    > to documentation / info on that? I wasn't able to find anything
    > decent on the Net :(
    That was my experience as well.

    I would suggest the freeTDS and unixODBC lists may be a place to
    explore and seek help. I'm very curious as to what you may find, too.

    For me, I'd say that it may be the freeTDS implementation, or unixODBC
    that is at fault, somehow. I often have timeout issues with the
    utilities that come with these programs when I use them from a remote
    location.

    Then again, it seems like the problem mostly goes away when the load on
    the server is very low. Honestly, I don't have a clue.

    Fortunately for me there is no latency between my appserver and the
    database, but I'm stuck with that setup until I can figure out the
    timing issues.

    Cheers,

    --
    Michael Higgins
    michael.higgins[at]evolone[dot]org
  • Tim Bunce at Mar 30, 2008 at 10:49 pm

    On Sat, Mar 29, 2008 at 06:03:01AM +0300, Fi Dot wrote:

    > Our problem is that we have a single database that is being accessed by
    > multiple application servers all over the world. Sometimes network latency
    > between an appserver and the database could reach one second!!!
    >
    > The basic problem we're having is that, say, if I call a function on an
    > appserver that has high latency to the database but low latency to the
    > client, it's *slower* than if I were calling the remote app server
    > that has low latency to the database but high latency to the client.
    >
    > which doesn't make any sense to me, because the size of data transferred
    > between the app and DB should be no larger than the size of data transferred
    > from an appserver to a client (we use XMLRPC to do function calls on the
    > appservers).
    When dealing with latency the number of round-trips is almost always
    much more significant than the size of the data transferred.
    > We don't use bind; we just call stored procedures that return large record
    > sets (possibly multiple ones).
    >
    > The only reason for this behavior I could see is that if TDS does
    > roundtrips to fetch the data... that's my guess... I'm going to sit down with
    > Wireshark to see what's going on exactly, but meanwhile....
    >
    > Does anyone have any experience with that, or have any good pointers to
    > documentation / info on that? I wasn't able to find anything decent on the
    > Net :(
    You might find DBD::Gofer useful.
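
    For context, DBD::Gofer executes a whole DBI request remotely over a
    stateless transport, so one logical call costs one round trip regardless
    of how many rows come back. A connection through its HTTP transport looks
    roughly like this (the URL and DSN names here are hypothetical
    placeholders, not from the thread):

    ```
    # Hypothetical example: the gofer endpoint and ODBC DSN are made up.
    dbi:Gofer:transport=http;url=http://appserver.example.com/gofer;dsn=dbi:ODBC:mydsn
    ```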

    Tim.

Discussion Overview
group: dbi-users
categories: perl
posted: Mar 29, '08 at 3:03a
active: Mar 30, '08 at 10:49p
posts: 3
users: 3
website: dbi.perl.org
