Hi,

Overview
=======

I'm doing some simple file manipulation work and the process gets
"Killed" everytime I run it. No traceback, no segfault... just the
word "Killed" in the bash shell and the process ends. The first few
batch runs would only succeed with one or two files being processed
(out of 60) before the process was "Killed". Now it makes no
successful progress at all. Just a little processing then "Killed".


Question
=======

Any ideas? Is there a buffer limitation? Do you think it could be the
filesystem?
Any suggestions appreciated.... Thanks.


The code I'm running:
==================

from glob import glob

def manipFiles():
    filePathList = glob('/data/ascii/*.dat')
    for filePath in filePathList:
        f = open(filePath, 'r')
        lines = f.readlines()[2:]
        f.close()
        f = open(filePath, 'w')
        f.writelines(lines)
        f.close()
        print file


Sample lines in File:
================

# time, ap, bp, as, bs, price, vol, size, seq, isUpLast, isUpVol, isCancel

1062993789 0 0 0 0 1022.75 1 1 0 1 0 0
1073883668 1120 1119.75 28 33 0 0 0 0 0 0 0


Other Info
========

- The file sizes range from 76 KB to 146 MB
- I'm running on a Gentoo Linux OS
- The filesystem is partitioned and using: XFS for the data
repository, Reiser3 for all else.


  • Matt Nordhoff at Aug 28, 2008 at 2:10 pm

    dieter wrote:
    Hi,

    Overview
    =======

    I'm doing some simple file manipulation work and the process gets
    "Killed" everytime I run it. No traceback, no segfault... just the
    word "Killed" in the bash shell and the process ends. The first few
    batch runs would only succeed with one or two files being processed
    (out of 60) before the process was "Killed". Now it makes no
    successful progress at all. Just a little processing then "Killed".
    That isn't a Python thing. Run "sleep 60" in one shell, then "kill -9"
    the process in another shell, and you'll get the same message.

    I know my shared web host has a daemon that does that to processes that
    consume too many resources.

    Wait a minute. If you ran this multiple times, won't it have removed the
    first two lines from the first files multiple times, deleting some data
    you actually care about? I hope you have backups...
    Question
    =======

    Any ideas? Is there a buffer limitation? Do you think it could be the
    filesystem?
    Any suggestions appreciated.... Thanks.


    The code I'm running:
    ==================

    from glob import glob

    def manipFiles():
        filePathList = glob('/data/ascii/*.dat')
    If that dir is very large, that could be slow. Both because glob will
    run a regexp over every filename, and because it will return a list of
    every file that matches.

    If you have Python 2.5, you could use glob.iglob() instead of
    glob.glob(), which returns an iterator instead of a list.
        for filePath in filePathList:
            f = open(filePath, 'r')
            lines = f.readlines()[2:]
    This reads the entire file into memory. Even better, I bet slicing
    copies the list object temporarily, before the first one is destroyed.
            f.close()
            f = open(filePath, 'w')
            f.writelines(lines)
            f.close()
            print file
    This is unrelated, but "print file" will just say "<type 'file'>",
    because it's the name of a built-in object, and you didn't assign to it
    (which you shouldn't anyway).
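
    Illustratively, in a Python 2 interpreter:

        >>> print file
        <type 'file'>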


    Actually, if you *only* ran that exact code, it should exit almost
    instantly, since it does one import, defines a function, but doesn't
    actually call anything. ;-)
    Sample lines in File:
    ================

    # time, ap, bp, as, bs, price, vol, size, seq, isUpLast, isUpVol, isCancel

    1062993789 0 0 0 0 1022.75 1 1 0 1 0 0
    1073883668 1120 1119.75 28 33 0 0 0 0 0 0 0


    Other Info
    ========

    - The file sizes range from 76 KB to 146 MB
    - I'm running on a Gentoo Linux OS
    - The filesystem is partitioned and using: XFS for the data
    repository, Reiser3 for all else.
    How about this version? (note: untested)

    import glob
    import os

    def manipFiles():
        # If you don't have Python 2.5, use "glob.glob" instead.
        filePaths = glob.iglob('/data/ascii/*.dat')
        for filePath in filePaths:
            print filePath
            fin = open(filePath, 'rb')
            fout = open(filePath + '.out', 'wb')
            # Discard two lines
            fin.next(); fin.next()
            fout.writelines(fin)
            fin.close()
            fout.close()
            os.rename(filePath + '.out', filePath)

    I don't know how light it will be on CPU, but it should use very little
    memory (unless you have some extremely long lines, I guess). You could
    also write a version that just used .read() and .write() in chunks.
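
    Something like this, perhaps (a minimal sketch of that chunked
    variant; the helper name and the 1 MB chunk size are my own
    illustrative choices, and it is just as untested as the above):

    import os

    def stripHeaderChunked(filePath, chunkSize=1 << 20):
        fin = open(filePath, 'rb')
        fout = open(filePath + '.out', 'wb')
        # Use readline() rather than next() here: Python 2 file objects
        # refuse to mix iteration with subsequent read() calls.
        fin.readline()
        fin.readline()
        while True:
            chunk = fin.read(chunkSize)
            if not chunk:
                break
            fout.write(chunk)
        fin.close()
        fout.close()
        os.rename(filePath + '.out', filePath)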

    Also, it temporarily duplicates "whatever.dat" to "whatever.dat.out",
    and if "whatever.dat.out" already exists, it will blindly overwrite it.

    Also, if this is anything but a one-shot script, you should use
    "try...finally" statements to make sure the file objects get closed (or,
    in Python 2.5, the "with" statement).
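
    For example, a sketch of the "with" form (my illustration, reusing
    filePath from the loop above; Python 2.5 needs the __future__
    import, 2.6+ does not):

    from __future__ import with_statement

    with open(filePath, 'rb') as fin:
        with open(filePath + '.out', 'wb') as fout:
            fin.next(); fin.next()  # discard the two header lines
            fout.writelines(fin)
    os.rename(filePath + '.out', filePath)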
    --
  • Glenn Hutchings at Aug 29, 2008 at 7:12 pm

    dieter <vel.accel at gmail.com> writes:

    I'm doing some simple file manipulation work and the process gets
    "Killed" everytime I run it. No traceback, no segfault... just the
    word "Killed" in the bash shell and the process ends. The first few
    batch runs would only succeed with one or two files being processed
    (out of 60) before the process was "Killed". Now it makes no
    successful progress at all. Just a little processing then "Killed".

    Any ideas? Is there a buffer limitation? Do you think it could be the
    filesystem?
    Any suggestions appreciated.... Thanks.

    The code I'm running:
    ==================

    from glob import glob

    def manipFiles():
        filePathList = glob('/data/ascii/*.dat')
        for filePath in filePathList:
            f = open(filePath, 'r')
            lines = f.readlines()[2:]
            f.close()
            f = open(filePath, 'w')
            f.writelines(lines)
            f.close()
            print file
    Have you checked memory usage while your program is running? Your

    lines = f.readlines()[2:]

    statement will need almost twice the memory of your largest file. This
    might be a problem, depending on your RAM and what else is running at the
    same time.

    If you want to reduce memory usage to almost zero, try reading lines from
    the file and writing all but the first two to a temporary file, then
    renaming the temp file to the original:

    import os

    infile = open(filePath, 'r')
    outfile = open(filePath + '.bak', 'w')

    for num, line in enumerate(infile):
        if num >= 2:
            outfile.write(line)

    infile.close()
    outfile.close()
    os.rename(filePath + '.bak', filePath)
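
    As an aside (my addition, not part of Glenn's suggestion), the skip
    can also be written with itertools.islice, avoiding the per-line
    "num >= 2" test:

    from itertools import islice

    outfile.writelines(islice(infile, 2, None))  # skip the first two lines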

    Glenn
  • Fredrik Lundh at Aug 29, 2008 at 7:37 pm

    Glenn Hutchings wrote:

    Have you checked memory usage while your program is running? Your

    lines = f.readlines()[2:]

    statement will need almost twice the memory of your largest file.
    footnote: list objects contain references to string objects, not the
    strings themselves. the above temporarily creates two list objects, but
    the actual file content is only stored once.
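
    A quick interactive check of that point (illustrative values):

        >>> lines = ['a' * 100, 'b' * 100, 'c' * 100]
        >>> tail = lines[1:]      # a new list object...
        >>> tail[0] is lines[1]   # ...sharing the same string objects
        True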

    </F>
  • Paul Boddie at Aug 29, 2008 at 7:25 pm

    On 28 Aug, 07:30, dieter wrote:
    I'm doing some simple file manipulation work and the process gets
    "Killed" everytime I run it. No traceback, no segfault... just the
    word "Killed" in the bash shell and the process ends. The first few
    batch runs would only succeed with one or two files being processed
    (out of 60) before the process was "Killed". Now it makes no
    successful progress at all. Just a little processing then "Killed".
    It might be interesting to check the various limits in your shell. Try
    this command:

    ulimit -a

    Documentation can be found in the bash manual page. The limits include
    memory size, CPU time, open file descriptors, and a few other things.
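
    A small sketch of my own for checking the same limits from inside
    Python, via the standard-library resource module (Unix only; -1
    means unlimited):

    import resource

    for name in ('RLIMIT_AS', 'RLIMIT_DATA', 'RLIMIT_RSS'):
        soft, hard = resource.getrlimit(getattr(resource, name))
        print '%s: soft=%s hard=%s' % (name, soft, hard)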

    Paul
  • Fredrik Lundh at Aug 29, 2008 at 7:34 pm

    dieter wrote:

    Any ideas? Is there a buffer limitation? Do you think it could be the
    filesystem?
    what does "ulimit -a" say?

    </F>
  • Eric Wertman at Aug 30, 2008 at 3:07 pm

    I'm doing some simple file manipulation work and the process gets
    "Killed" everytime I run it. No traceback, no segfault... just the
    word "Killed" in the bash shell and the process ends. The first few
    batch runs would only succeed with one or two files being processed
    (out of 60) before the process was "Killed". Now it makes no
    successful progress at all. Just a little processing then "Killed".
    This is the behavior you'll see when your OS has run out of some
    memory resource: the kernel sends the process a SIGKILL (signal 9).
    I'm pretty sure that if you exceed a soft limit instead, your program
    will abort with an out-of-memory error.

    Eric
  • Dieter h at Sep 2, 2008 at 4:56 am

    On Sat, Aug 30, 2008 at 11:07 AM, Eric Wertman wrote:
    I'm doing some simple file manipulation work and the process gets
    "Killed" everytime I run it. No traceback, no segfault... just the
    word "Killed" in the bash shell and the process ends. The first few
    batch runs would only succeed with one or two files being processed
    (out of 60) before the process was "Killed". Now it makes no
    successful progress at all. Just a little processing then "Killed".
    This is the behavior you'll see when your OS has run out of some
    memory resource: the kernel sends the process a SIGKILL (signal 9).
    I'm pretty sure that if you exceed a soft limit instead, your program
    will abort with an out-of-memory error.

    Eric
    Eric, thank you very much for your response.
