FAQ
What is the (hopefully unique) obvious way of running a sub-process if I
want to get the exit code and output without resorting to multi-threading?

It seems like I should be able to do the following:

foo = popen2.Popen3(cmd)
foo.wait()
foo.fromchild.read()

But it seems that on my system (a Sun Ultra 30 running Solaris 8) this
will hang if the output happens to be more than about 12k of text. I'm
guessing this is a buffer-size issue, but it would be a mistake to assume
a fixed output size for the child process ahead of time.

Couldn't wait() be modified to buffer the child's output, so that it could
actually wait until the process exits?

While we're at it, is there any way to read/wait with a time-out? I.e.,
read more than 0 bytes unless it takes more than a time-out value, so I
can deal with sub-processes that never terminate?

-Dan
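
The replies below converge on the same basic fix: read the child's output
before calling wait(), so the child can never block on a full pipe. A minimal
sketch using the same popen2 module (cmd is a placeholder command string):

    import popen2

    foo = popen2.Popen3(cmd)
    foo.tochild.close()            # send no input; let the child see EOF on stdin
    output = foo.fromchild.read()  # drain the pipe first; read() runs until EOF
    status = foo.wait()            # safe now: the child has finished writing
                                   # (os.WEXITSTATUS(status) gives the exit code)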


  • Jp Calderone at Feb 12, 2004 at 3:08 pm

    On Thu, Feb 12, 2004 at 06:10:01AM -0800, Daniel Timothy Bentley wrote:
    What is the (hopefully unique) obvious way of running a sub-process if I
    want to get the exit code and output without resorting to multi-threading?

    It seems like I should be able to do the following:

    foo = popen2.Popen3(cmd)
    foo.wait()
    foo.fromchild.read()

    But it seems that on my system (a Sun Ultra 30 running Solaris 8) this
    will hang if the output happens to be more than about 12k of text. I'm
    guessing this is a buffer-size issue, but it would be a mistake to assume
    a fixed output size for the child process ahead of time.

    Couldn't wait() be modified to buffer the child's output, so that it could
    actually wait until the process exits?

    While we're at it, is there any way to read/wait with a time-out? I.e.,
    read more than 0 bytes unless it takes more than a time-out value, so I
    can deal with sub-processes that never terminate?
    (Untested)

    from twisted.internet import protocol, reactor

    class PrintyProtocol(protocol.ProcessProtocol):
        bytes = ''
        errBytes = ''

        def outReceived(self, bytes):
            self.bytes += bytes

        def errReceived(self, bytes):
            self.errBytes += bytes

        def processEnded(self, reason):
            print 'Process done. Got:', repr(self.bytes), repr(self.errBytes)
            reactor.stop()

    # cmd is the path to the executable; by convention args[0] is the program name.
    reactor.spawnProcess(PrintyProtocol(), cmd, args=(cmd,))
    reactor.run()

    Jp
  • Thomas Guettler at Feb 12, 2004 at 4:17 pm

    On Thu, 12 Feb 2004 06:10:01 -0800, Daniel Timothy Bentley wrote:

    What is the (hopefully unique) obvious way of running a sub-process if I
    want to get the exit code and output without resorting to multi-threading?

    It seems like I should be able to do the following:

    foo = popen2.Popen3(cmd)
    foo.wait()
    foo.fromchild.read()
    I do it like this:

    import popen2

    def shell_command(cmd):
        # There must not be output to stdout or stderr,
        # otherwise an exception is raised.
        p = popen2.Popen4(cmd)          # Popen4 merges stdout and stderr
        # Read before waiting, so the child can never block on a full pipe.
        output = p.fromchild.read()
        ret = p.wait()
        if ret or output:
            raise Exception("Error in shell_command '%s': ret=%s output='%s'" % (
                cmd, ret, output))
  • Daniel Danger Bentley at Feb 12, 2004 at 4:54 pm
    Correct me if I'm wrong, but couldn't this miss some of the output? Or is
    read in Python guaranteed to return all the data that can ever be returned
    (unlike the C library function)?

    -D
    "Thomas Guettler" <guettli at thomas-guettler.de> wrote in message
    news:pan.2004.02.12.16.17.48.957440 at thomas-guettler.de...
    import popen2

    def shell_command(cmd):
        # There must not be output to stdout or stderr,
        # otherwise an exception is raised.
        p = popen2.Popen4(cmd)          # Popen4 merges stdout and stderr
        # Read before waiting, so the child can never block on a full pipe.
        output = p.fromchild.read()
        ret = p.wait()
        if ret or output:
            raise Exception("Error in shell_command '%s': ret=%s output='%s'" % (
                cmd, ret, output))
  • Donn Cave at Feb 12, 2004 at 8:38 pm
    In article <c0gb41$1bl$1 at news.Stanford.EDU>,
    "Daniel Danger Bentley" wrote:
    Correct me if I'm wrong, but couldn't this miss some of the output? Or is
    read in Python guaranteed to return all the data that can ever be returned
    (unlike the C library function)?
    You are indeed wrong, misled by the name - the C library function
    in question is fread(3), which does read everything, unlike the read(2)
    system call, which is what posix.read (a.k.a. os.read) wraps.

    Back to the original question, I think the simplest way to get
    status and output is

    fp = os.popen(cmd, 'r')
    output = fp.read()
    status = fp.close()                 # None if the exit status was 0
    if status is not None:
        status = os.WEXITSTATUS(status)

    Now if you want a timeout, you'll have to do it yourself, and
    of course the file object isn't the way to go because of its
    underlying buffered read. You can turn off buffering, but then
    you're looking at one system call per byte to implement readline
    et al., so it's not a generally attractive solution. Better to
    get the file descriptor (fileobject.fileno()), and probably use
    select on it. You might also want to do that if you end up
    reading from two different pipes for stderr and stdout, because
    that can lead to the same buffer deadlock you're running into
    now - if I'm reading from stdout while the process writes to
    stderr, or vice versa, it can fill the pipe and block waiting
    for me to empty it, which I won't do because I'm stuck on stdout.

    Donn Cave, donn at u.washington.edu
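
A minimal sketch of the select-on-a-file-descriptor approach described above
(not part of the original thread): cmd is a placeholder shell command, and the
4096-byte chunk size and 5-second default timeout are arbitrary. Note that
os.read, like read(2), may return fewer bytes than requested, which is why it
is only called once select reports the descriptor readable.

    import os
    import select

    def run_with_timeout(cmd, timeout=5.0):
        fp = os.popen(cmd, 'r')
        fd = fp.fileno()                 # select on the raw descriptor,
        chunks = []                      # bypassing the file object's buffering
        while 1:
            ready, _, _ = select.select([fd], [], [], timeout)
            if not ready:
                # Timed out with no data: the child is stuck or slow.
                # Kill it here if you don't want close() to block below.
                break
            data = os.read(fd, 4096)     # may return fewer than 4096 bytes;
            if not data:                 # '' means EOF
                break
            chunks.append(data)
        status = fp.close()              # None means exit status 0
        if status is not None:
            status = os.WEXITSTATUS(status)
        return ''.join(chunks), status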
