FAQ
Hi,
I have been watching the thread about File::Copy. I ran into an
issue in the Linux environment that raises a serious question: maximum
file size. Keep in mind the server is running RH 7.0; we also have 7.2
Enterprise Server, and we pay for support. But even RH support says it
can't handle files in excess of 2 GB (approx.). Whether I was using tar,
gzip, or most any other tool, I found that the target file came out at
only 1.8 GB instead of the much larger size it should be, in our case
16 GB. This was on a "/mnt" device, not a local disk, so the copy (tar
in this case) was from one "/mnt" device to another. It did not matter
whether I used tar, cp, mv, or a Perl program; same problem.

Everyone I talked to about this in the various "groups" only said
"Rebuild the kernel with 64-bit support", but this is on an Intel box
(32-bit?). Have any of YOU seen this problem? I can't be the only person
dealing with large files. Ideas? How does this issue stand in later releases?

Thanks.
--
Rich Parker


  • Wiggins at Aug 20, 2003 at 3:31 pm
    ------------------------------------------------
    On Tue, 19 Aug 2003 14:20:47 -0700, Rich Parker wrote:

    I am no kernel hacker, so take what I say with a grain of salt. The large-file limit has to do with the addressable space on the disk: to support files over 2 GB you need more "bits" to produce longer offsets, which I believe is why they suggested you add 64-bit support. It's been a while since I was doing kernel builds, but I thought there was a specific switch for "large file size"; I thought that was specifically to support partitions larger than 2 GB, not files themselves, but maybe they are one and the same.

    Now, you mention that the file is 1.8 GB. Is that machine-readable or human-readable, i.e., is that where 1 KB = 1000 bytes or 1024 bytes? If the 1.8 is human-readable, your file is likely right up against the 2 GB boundary.
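For concreteness, both numbers can be checked with shell arithmetic (a quick sketch; assumes bash-style 64-bit `$(( ))` arithmetic):

```shell
# The classic 32-bit signed off_t ceiling: 2^31 - 1 bytes
echo $(( (1 << 31) - 1 ))                  # 2147483647, about 2.0 GB in binary units
# 1.8 GB read in binary units (1 KB = 1024 bytes), expressed in bytes:
echo $(( 18 * 1024 * 1024 * 1024 / 10 ))   # 1932735283
```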

    I am not sure about copy; theoretically it should work if the file can be addressed completely. A move won't work across filesystem boundaries anyway, nor will a 'rename' in Perl. Again, because Perl is talking to the underlying kernel, theoretically you would need large-file support in the kernel first, but then you *ALSO* need it in the 'perl' (not Perl) executable. For instance, perl -V will have something near the bottom like:

    Compile-time options: ... USE_LARGE_FILES ...

    Though I am also not a Perl internals hacker, so I don't know exactly what that adds, but I suspect it is needed in your case if you use a Perl script.

    To my knowledge this was fixed in the 2.4 and newer kernels (are you running 2.2?), or it was fixed by default in the jump from RH 7.x to RH 8.0.
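One way to test this end-to-end on a given box, without filling a disk, is to create a sparse file past the boundary and see whether the tools report its full size (a sketch; assumes GNU coreutils and a filesystem that supports sparse files, and `bigtest.dat` is just a scratch name):

```shell
# Sparse 3 GiB file: occupies no real disk blocks, but has a >2 GB logical size
truncate -s 3G bigtest.dat
size=$(stat -c %s bigtest.dat)      # GNU stat; prints the size in bytes
echo "reported size: $size bytes"
# Without large-file support, tools cap out near 2147483647 (2^31 - 1)
if [ "$size" -eq 3221225472 ]; then
    echo "large files OK"
fi
rm -f bigtest.dat
```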

    Maybe one of the real gurus can provide better explanation/help...

    In any case you may get better help asking on a Linux kernel list...

    http://danconia.org
  • Rich Parker at Aug 21, 2003 at 1:21 am

    ------------------------------------------------
    You have a very good point; I've seen that "LARGE_FILES" thing in the
    setup. However, the people at Red Hat said not to do that, but rather
    to wait for the next release of the 2.4 kernel; at that time (about 6
    months ago) 2.4 was quite "buggy" according to them. Yet the current
    "advertised" release of RH is 9.0! Which makes me wonder about it: the
    stuff you can pay "support" for is way back on the release scale.

    Here at work we also have a S/390 running VM, and I've been trying to
    get the "powers that be" to allow me to use Linux and all of the
    things that go with it, like Perl, but it has been a real uphill
    battle. If any of you can give me a GREAT reason to help me convince
    them, I'm "all ears". I can see the benefits of having a whole bunch
    of servers on ONE box, but it's very difficult to get them to the
    next step: $30K for TCP/IP for VM, which we would need. But then that
    2 GB limit hits me square in the face again.

    To answer your question about the 1.8: YES, when I use ANY piece of
    software, or do an ls, for example, it only shows 1.8 GB, while on
    the WinNT machine where the file sits it shows 16 GB. It didn't
    matter which piece of software or which command I was using. I don't
    think I would see this if I were using Perl in a Win32 arena, but
    with all of the trouble I had pushing huge amounts of SQL data
    through the CGI interface, I had to abandon Win32 for the more stable
    and less "buggy" Linux, and then I ran into the 2 GB limit. Looks
    like WE have to wait until the Enterprise edition gets the newer
    kernel, agreed? But I HATE waiting... Call me impatient...

    Thanks...



    --
    Rich Parker
  • David at Aug 20, 2003 at 8:20 pm

    Rich Parker wrote:

    I believe RH 7.1 beta r1 (code name Fisher), which uses kernel 2.4.0, is the
    first RH that supports the LFS (Large File Support) extension. Your server
    running 7.0 won't be able to address a > 2 GB file. If you have 7.2
    Enterprise, why don't you use that instead? If you pay for support, isn't
    RH supposed to provide help / instructions on how to get your 7.0 set up
    with LFS support?

    If you simply want to know whether Perl is able to deal with a > 2 GB file, you can:

    [panda]$ perl -V | grep 'uselargefiles'

    and you should see something like:

    uselargefiles=define

    To see whether perl (the binary) is compiled to use the LFS API, use:

    [panda]$ perl -V | grep 'OFFSET_BIT'

    and if you see something like:

    -D_GNU_SOURCE -fno-strict-aliasing -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64

    you are in good shape. The easiest solution to get LFS support is to upgrade
    your 7.0, IMO.

    david
  • Rich Parker at Aug 20, 2003 at 11:20 pm

    David,
    Thanks for the observations. But the point RH made to me (again,
    about 6 months ago) was that this issue exists in 7.0-7.2, and YES,
    they did not recommend rebuilding the kernel because at THAT time 2.4
    was "buggy" in their opinion. I am not so worried about Perl being
    able to READ/WRITE large files; the OS has to be able to do this
    first, correct? That's my main point and the obstacle I've run into.
    It seems to me it is time to revisit this with the RH folks and see
    what THEY say about it, then go through the pain of upgrading a
    server with a ton of Perl code on it; of course, everything must be
    TESTED to make 100% sure I haven't dropped anything through the
    cracks.

    Thanks...


Discussion Overview
group: beginners
categories: perl
posted: Aug 20, '03 at 6:22a
active: Aug 21, '03 at 1:21a
posts: 5
users: 3
website: perl.org

3 users in discussion

Rich Parker: 3 posts, Wiggins: 1 post, David: 1 post


site design / logo © 2022 Grokbase