I have what I am sure is a simple problem to which there should be a simple
answer.

I have a Python program which processes a large number of simulations; a
500 MHz machine takes about three weeks to get through them. Therefore I
want to divide this labor amongst the 8-12 machines sitting around doing
nothing after hours. I can do this if I have each machine write a
progress statement to a common file (updated roughly every 30 minutes per
machine). Each machine can consult this file to "know" what the next step
of the computation should be and move to that computation. In this way,
multiple instances of the same program running on various machines/platforms
can dynamically leapfrog each other to the finish line.
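One way to make that leapfrogging safe is to serialize access to the common
file with an advisory lock, so each machine atomically reads the scenarios
already claimed and appends its claim for the next one. A minimal sketch in
modern Python, assuming POSIX fcntl locking and an illustrative shared path
named progress.txt (note that advisory locks are not fully reliable on some
network filesystems):

```python
import fcntl  # POSIX advisory file locking (Unix only)

PROGRESS_FILE = 'progress.txt'  # illustrative shared path

def claim_next_scenario(path=PROGRESS_FILE):
    """Under an exclusive lock, read every scenario number already
    claimed and append (i.e. claim) the next one."""
    # 'a+' creates the file if it does not exist and allows reading
    f = open(path, 'a+')
    try:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until we hold the lock
        f.seek(0)
        claimed = [int(line) for line in f if line.strip()]
        nxt = max(claimed) + 1 if claimed else 0
        f.write('%d\n' % nxt)
        f.flush()
        fcntl.flock(f, fcntl.LOCK_UN)
    finally:
        f.close()
    return nxt
```

Each instance would call claim_next_scenario(), wander off to compute that
scenario for its half hour, and come back for the next; the lock is held only
for the instant it takes to read and append one number.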

I am using:

    loop structure:
        progress_stamp = open('....')
        sys.stdout = progress_stamp
        print scenario
        progress_stamp.flush()
        progress_stamp.close()
        sys.stdout = a_different_output_file
        loop:
            computations
            print results

To accomplish this it seems that I must close the file (as shown), but once
it is closed, the same instance doesn't seem to write to it again, even
after another sys.stdout=progress_stamp assignment.
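That closed-file behavior is expected: once a Python file object has been
close()d it can never be written again, and rebinding sys.stdout to it does
not reopen it; a write through it raises ValueError. The usual fix is to
reopen the file for each update, and to write to it directly rather than
rebinding sys.stdout at all. A small sketch (the path name is illustrative):

```python
def record_progress(scenario, path='progress.txt'):
    # Open a fresh file object for every update; the previous one,
    # once closed, can never be written to again.
    f = open(path, 'a')   # append mode, so other writers are not clobbered
    f.write('%s\n' % scenario)
    f.flush()             # push the update out before wandering off to compute
    f.close()
```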

Not closing the file means that one process dominates. I need all instances
to be able to read and write the file, just not at the same time. In
general, a process only needs read/write access to the file long enough to
write a single number before wandering off to perform roughly half an hour
of computations.
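That "not at the same time" requirement is exactly what an exclusive
advisory lock provides on POSIX systems: whichever process holds the lock
writes its number while everyone else waits. A tiny single-process
demonstration, where a second file descriptor stands in for a second machine
(on Linux, flock treats independently opened descriptors as separate
lock holders; the file name is illustrative):

```python
import fcntl

# Two independent opens of the same file behave like two separate
# processes as far as flock() is concerned.
a = open('lock_demo.txt', 'w')
b = open('lock_demo.txt', 'w')

fcntl.flock(a, fcntl.LOCK_EX)          # first writer takes the lock
try:
    # a nonblocking attempt by the second writer is refused
    fcntl.flock(b, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_writer_blocked = False
except OSError:
    second_writer_blocked = True

fcntl.flock(a, fcntl.LOCK_UN)          # released: the second writer may proceed
fcntl.flock(b, fcntl.LOCK_EX)
fcntl.flock(b, fcntl.LOCK_UN)
a.close()
b.close()
```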

What am I missing?

Thanks.


Michaell Taylor

===========================

Michaell Taylor
Associate Professor, Dept. of Political Science, NTNU, Trondheim, NORWAY
Senior Economist, Economics Research Group, Reis, New York City, USA
Adjunct Professor, University of Durban, South Africa

group: python-list @ python.org
posted: May 25, '00 at 4:36p (1 post, 1 user)
