Peter Hansen wrote:
> The real question might be why would you want to?
> If you don't want a thread, but you want something which takes
> input from elsewhere and does some processing, then returns
> control to some other place until more data is available (which
> is what one might assume if threads aren't good for you), there
> is already a convenient solution: the subroutine. ;-)
> Seriously, what's the requirement driving the need for this?
> (I suspect Erik Max's answer is still what you need, and you
> misunderstand the nature of what he was suggesting, but if you'll
> explain the specific case you have in mind we'll know for sure.)
No, I didn't say that I didn't want a thread. If there are no
serious drawbacks to relying heavily on threads, I'd be glad to
use them. This would be deluxe.
There really isn't any need for a generator, but it is a very
nice way to construct the input side of a program (the caller
doesn't have to worry about explicitly calling initialization and
finalization). Some kind of through-the-looking-glass version
of a generator would be a very nice way to construct the output
side.
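To make that concrete, here is a minimal sketch in modern Python (the names read_items and write_items are made up for illustration): the generator is the input side, and a send()-driven consumer is the looking-glass output side.

```python
def read_items(lines):
    # Input side: the caller performs no setup or teardown; the
    # generator's try/finally handles both when iteration ends.
    it = iter(lines)              # setup (e.g. opening a file) goes here
    try:
        for item in it:
            yield item.upper()
    finally:
        pass                      # teardown (e.g. file.close()) goes here

def write_items(out):
    # Output side: the mirror image -- a consumer driven by send(),
    # which likewise hides its own setup and teardown.
    try:
        while True:
            out.append((yield))   # blocks until the producer send()s a value
    finally:
        pass                      # a flush/close would go here

results = []
sink = write_items(results)
next(sink)                        # prime the consumer coroutine
for item in read_items(["a", "b"]):
    sink.send(item)
sink.close()                      # results is now ["A", "B"]
```

The caller just iterates and sends; neither side of the pipe ever sees an open or a close.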
I decided to test threads to see what happens:
-------------Warning, Code Follows---------------------
import thread, time, Queue

queues = [Queue.Queue()]
threads = []
appendLock = thread.allocate_lock()

def TransferData(queues, atPos):
    while len(queues) < atPos + 1:
        time.sleep(0.001)             # wait for our input queue to exist
    inputQueue = queues[atPos]
    outputQueue = Queue.Queue()
    appendLock.acquire()
    queues.append(outputQueue)
    appendLock.release()
    data = inputQueue.get(True)       # block until a message arrives
    outputQueue.put(data)             # hand it on to the next link

startTime = time.time()
for i in range(1000):
    threads.append(thread.start_new_thread(TransferData, (queues, i)))
queues[0].put("go")                   # send a message down the chain
while len(queues) < 1001:
    time.sleep(0.001)                 # let every link build itself
queues[1000].get(True)                # message emerges from the last queue
print time.time() - startTime
--------------Warning Discussion Follows -------------------------
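For anyone who wants to rerun the experiment, here is roughly the same chain as a self-contained sketch in modern spelling (threading/queue instead of thread/Queue, 10 links instead of 1000, and the queues pre-created rather than appended under a lock -- all my simplifications):

```python
import threading, queue, time

N = 10                                    # scaled down from 1000
queues = [queue.Queue() for _ in range(N + 1)]

def transfer(in_q, out_q):
    # one link of the chain: move a single message along
    out_q.put(in_q.get(True))

threads = []
start = time.time()
for i in range(N):
    t = threading.Thread(target=transfer, args=(queues[i], queues[i + 1]))
    t.start()
    threads.append(t)
queues[0].put("hello")                    # feed the head of the chain
result = queues[N].get(True)              # message emerges from the tail
for t in threads:
    t.join()
elapsed = time.time() - start
```

Each thread blocks on its input queue, so the put into queues[0] cascades down the whole chain before the final get returns.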
Running this on my 200 MHz Windows NT machine gave decent results.
With 1000 threads and 1000 Queues, memory used was 24MB. Python
takes almost 10 MB to start, so the overhead per thread + Queue
is only a little over 10 kb. Passing all the messages through
the chain took 2.5 minutes, so we are down around a quarter millisec
for each put or get.
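That per-operation figure is easy to sanity-check in isolation; a single-threaded micro-benchmark of bare put/get (the iteration count is arbitrary) looks like:

```python
import time, queue

q = queue.Queue()
N = 100000
start = time.time()
for i in range(N):
    q.put(i)                                # one put...
    q.get()                                 # ...and one get per iteration
per_op = (time.time() - start) / (2 * N)    # seconds per put or get
print("%.6f sec per operation" % per_op)
```

On current hardware this comes out far below a quarter of a millisecond, since there is no thread switching involved; in the chained-thread test most of the cost is presumably the context switch on each blocking get.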
That should be pretty good for many uses.