Hello, here is a small tutorial page about NESL, an easy parallel
programming language:
http://www-2.cs.cmu.edu/~scandal/nesl/tutorial2.html

Its syntax shares some similarities with Python's, for example:

function factorial(n) =
  if (n <= 1) then 1
  else n*factorial(n-1);

{factorial(i) : i in [3, 1, 7]};

This computes "in parallel, for each i in the sequence [3, 1, 7],
factorial i"


{sum(a) : a in [[2,3], [8,3,9], [7]]};

Summing a sequence is itself a parallel operation, so this is an
example of nested parallelism.
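
The same sketch extends to this example, with one caveat worth making
explicit: a plain Pool.map only parallelizes the outer comprehension,
whereas NESL would also parallelize each inner sum -- which is exactly
the nested parallelism being demonstrated:

    from multiprocessing import Pool

    data = [[2, 3], [8, 3, 9], [7]]

    if __name__ == "__main__":
        # Serial equivalent of {sum(a) : a in [[2,3], [8,3,9], [7]]}:
        print([sum(a) for a in data])        # [5, 20, 7]

        # Parallel over the outer sequence; the inner sums stay serial.
        with Pool() as pool:
            print(pool.map(sum, data))       # [5, 20, 7]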

So it seems to me that Python is already a good fit for parallel
interpretation, on multicore CPUs, PlayStation Cell-like processors,
etc. (only a few things would have to be changed or added in the syntax
to make it fit for parallelism).

Hugs,
bearophile


  • Josiah Carlson at Nov 17, 2004 at 6:37 pm

    bearophileHUGS at lycos.com (bearophile) wrote:
> Hello, here is a small tutorial page about NESL, an easy parallel
> programming language:
> http://www-2.cs.cmu.edu/~scandal/nesl/tutorial2.html
>
> Its syntax shares some similarities with Python's, for example:
>
> function factorial(n) =
>   if (n <= 1) then 1
>   else n*factorial(n-1);
>
> {factorial(i) : i in [3, 1, 7]};
>
> This computes "in parallel, for each i in the sequence [3, 1, 7],
> factorial i"
>
> {sum(a) : a in [[2,3], [8,3,9], [7]]};
>
> Summing a sequence is itself a parallel operation, so this is an
> example of nested parallelism.

The look of a language has nothing to do with its parallelizability. It
just so happens that the designers of NESL had language-design ideas
similar to those of the designers of Python.

> So it seems to me that Python is already a good fit for parallel
> interpretation, on multicore CPUs, PlayStation Cell-like processors,
> etc. (only a few things would have to be changed or added in the
> syntax to make it fit for parallelism).

There are various other reasons why Python is not as parallelizable as
you would think. Among them are the semantics of scoping, and whether
there is shared or unshared scope among the processors/nodes. If
shared, then any operation that could change scopes would need to be
distributed (ick); if unshared, then you are basically looking at an
automatically distributed tuplespace (Linda). It gets even uglier with
certain kinds of generators.

Regardless of which is the case, heavy modifications to Python would be
needed to make this happen.
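
A tiny illustration (not from the original thread) of the scoping
problem just described -- serially, each iteration observes the
bindings left behind by the previous one:

    total = 0

    def step(i):
        global total
        total += i        # rebinds a name in shared (module) scope
        return total

    print([step(i) for i in range(4)])    # [0, 1, 3, 6]

With unshared scope (say, one process per iteration), every worker
would increment its own copy of total and the result would be
[0, 1, 2, 3]; with shared scope, every such rebinding would have to be
distributed to all nodes.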


    - Josiah
  • Jon at Nov 18, 2004 at 1:06 am
    Josiah Carlson <jcarlson at uci.edu> wrote in message news:<mailman.6501.1100717187.5135.python-list at python.org>...
> The look of a language has nothing to do with its parallelizability.
> It just so happens that the designers of NESL had language-design
> ideas similar to those of the designers of Python.
>
> There are various other reasons why Python is not as parallelizable
> as you would think. Among them are the semantics of scoping, and
> whether there is shared or unshared scope among the processors/nodes.
> If shared, then any operation that could change scopes would need to
> be distributed (ick); if unshared, then you are basically looking at
> an automatically distributed tuplespace (Linda). It gets even uglier
> with certain kinds of generators.
>
> Regardless of which is the case, heavy modifications to Python would
> be needed to make this happen.
Even considering the above caveats, one can still employ Python-based
interpretive layers such as pyMPI over quite solid parallel computing
tools such as MPI. See http://pympi.sourceforge.net/.
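
A minimal sketch of that style of parallelism; the code below uses the
modern mpi4py API as a stand-in (pyMPI's own mpi module is similar),
and would be launched under an MPI runtime, e.g.
"mpirun -np 4 python psum.py":

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    data = range(100)               # every rank builds the same data
    partial = sum(data[rank::size]) # each rank sums its own slice

    total = comm.allreduce(partial, op=MPI.SUM)  # explicit communication
    if rank == 0:
        print(total)                # 4950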

    --Jon
  • Josiah Carlson at Nov 18, 2004 at 3:29 am

    jhujsak at neotopica.com (Jon) wrote:
> Even considering the above caveats, one can still employ Python-based
> interpretive layers such as pyMPI over quite solid parallel computing
> tools such as MPI. See http://pympi.sourceforge.net/.

Indeed. I wrote the equivalent of pyMPI in the spring of 2002 for an
undergraduate senior project. It was never a matter of "can
parallelization be done"; it was a matter of "can loops be
automatically parallelized".


    - Josiah
  • Corey Coughlin at Nov 18, 2004 at 1:29 am
    Josiah Carlson <jcarlson at uci.edu> wrote in message news:<mailman.6501.1100717187.5135.python-list at python.org>...
> There are various other reasons why Python is not as parallelizable
> as you would think. [...]
Well, I'm not sure it's necessarily that grim. Take an inference engine
something like the one proposed for Starkiller, track variable types as
closely as possible, add some data-flow capability to follow the
execution path, and you could probably do something useful. Given
Python's for..in loop syntax, the prohibition against changing loop
variables there, and list comprehensions, intelligently parallelized
loop unrolling could help out a lot on parallel architectures. Sure,
it'd be hard to do effectively in a strictly interpreted environment,
but if Starkiller ever comes out, it seems almost inevitable. Then
again, that project seems to be getting later all the time. :(
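
A hand-written sketch (hypothetical -- nothing like it shipped with
Starkiller) of the rewrite being described here: once inference proves
the loop body free of side effects, a list comprehension can be
unrolled into chunks and farmed out without changing its result:

    from multiprocessing import Pool

    def f(x):
        return x * x      # stand-in for a body proven side-effect-free

    xs = list(range(10))
    serial = [f(x) for x in xs]        # what the compiler starts from

    # What it could rewrite that to, once purity is established:
    if __name__ == "__main__":
        with Pool(4) as pool:
            parallel = pool.map(f, xs, chunksize=3)
        assert parallel == serial      # order and values preserved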
  • Josiah Carlson at Nov 18, 2004 at 3:39 am

    corey.coughlin at attbi.com (Corey Coughlin) wrote:
> Well, I'm not sure it's necessarily that grim. Take an inference
> engine something like the one proposed for Starkiller, track variable
> types as closely as possible, add some data-flow capability to follow
> the execution path, and you could probably do something useful. Given
> Python's for..in loop syntax, the prohibition against changing loop
> variables there, and list comprehensions, intelligently parallelized
> loop unrolling could help out a lot on parallel architectures. Sure,
> it'd be hard to do effectively in a strictly interpreted environment,
> but if Starkiller ever comes out, it seems almost inevitable. Then
> again, that project seems to be getting later all the time. :(

One could also just add a debugger that distributes data via pickles as
information changes, so data types don't really matter.

The real issue is that /anything/ can have an arbitrary side effect,
and for 'for i in j' parallelization to occur consistently, those side
effects must be handled properly. Those side effects can be horribly
ugly.
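
For instance (an assumed example, not from the thread), this loop looks
data-parallel, but its body mutates shared state, so naive
parallelization silently changes the outcome:

    log = []

    def work(i):
        log.append(i)     # order-dependent side effect
        return i * 2

    results = [work(i) for i in [3, 1, 7]]
    # Serially, log is always [3, 1, 7]. Run the iterations
    # concurrently and the order is no longer guaranteed; run them in
    # separate processes and the parent's log stays empty altogether.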


While I say "huzzah" for new languages (or preprocessors for older
languages) that make parallelization happen 'automatically', there is
something to be said for manually parallelizing your algorithms with
MPI, Linda, PVM, etc. At least then you can be explicit about your
communication and need not worry about whether your desired changes
actually get transferred (wasted bandwidth, scope overwriting, stale
data, etc.).
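
For contrast, a minimal explicit message-passing sketch (again using
the modern mpi4py API as a stand-in for the MPI bindings of the day):
nothing crosses a process boundary unless the programmer says so.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        comm.send([1, 2, 3], dest=1, tag=0)    # explicit hand-off
        print(comm.recv(source=1, tag=1))      # 6
    elif comm.Get_rank() == 1:
        work = comm.recv(source=0, tag=0)
        comm.send(sum(work), dest=0, tag=1)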

    - Josiah
