[salt-users] Orchestration in Custom State Modules
Hi all,

I am writing a state module that will build a distributed system. This
system has a master-slave architecture. The order in which the state is
enforced on each node matters. For example, I need the master node fully
bootstrapped and running before slave nodes join the cluster, so the state
on a slave node should only be run once the state on the master node has
run.

Now I know that the go-to solution for this is the orchestrate runner. And
yes, it's true that I could write an SLS file that takes pieces of my code
and runs them on individual nodes in the proper order. However, I find this
to be clunky and unnecessarily complicated. I want to be able to
essentially build the orchestration logic directly into the state. That way
a user could simply say "here are my nodes -- put them into the 'cluster'
state" and the state module would take care of the rest.

In other words, I want my state module to describe the state of a
distributed system, which just so happens to have multiple nodes in it. I
think Salt could handle this usage very well, but I don't see a way to do
it.

Is there any way to do this?

Thanks,
Joe

  • Seth House at Feb 27, 2015 at 3:29 pm
    What is involved in putting the slaves into the 'cluster' state? Is it
    running a command on the master (and/or slave nodes)? Or is it more
    simply making info about the slaves available to the master node?

    If it's the simpler latter option, you could expose the required info
    via the Salt Mine as part of the state you're writing and just let the
    master node passively pick that info up whenever it happens to be
    configured.
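
    For example, a minimal sketch of that approach, assuming the slave nodes
    carry a "roles" grain and should expose their IP addresses (the grain
    value and the mine function here are only illustrative):

    # Pillar (or minion config) on the slave nodes -- publish data to the
    # Salt Mine:
    mine_functions:
      network.ip_addrs: []

    # In an sls or template rendered on the master node, passively pick the
    # data up whenever it happens to be there:
    {% set slave_ips = salt['mine.get']('roles:slave', 'network.ip_addrs', 'grain') %}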

    If it's the less simple former option, it will require some kind of
    cross-node coordination. The problem with coordinating actions across
    multiple nodes is that a state is run on a single minion, and minions
    cannot communicate directly with one another. The Peer Publish system
    allows a minion to run commands on other minions, but those commands
    are proxied (and whitelisted) through the master, so you may as well
    just use a simpler master-initiated process instead.
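
    For reference, a minimal sketch of what the Peer Publish route involves
    (the target pattern and command are only illustrative): the master
    whitelists which functions minions may publish to one another, and a
    minion then calls publish.publish.

    # Master config -- allow any minion to publish cmd.run to other minions:
    peer:
      '.*':
        - cmd.run

    # On a minion, run a whitelisted command on other minions (proxied
    # through the master):
    # salt-call publish.publish 'slave*' cmd.run 'uptime'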

    Either using Orchestrate or a custom Runner module or custom events +
    the Reactor would work well for this from the sound of things. You can
    think of Orchestrate as the master-side equivalent to state runs, and
    Runner modules as the master-side equivalent of execution/state
    modules.

    The canonical Salt way to do this is probably as three separate
    minion-centric sls files: 1) configure the slave nodes, 2) configure
    the master node, 3) join them. And you're right that Orchestrate is
    probably the go-to way to execute those step-by-step. I would be
    interested to hear what you find clunky about that interface with the
    hopes of improving it.
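
    As a rough sketch of that shape (the sls names, targets, and grouping
    below are only illustrative, not a prescribed layout):

    # /srv/salt/orch/cluster.sls -- run with: salt-run state.orch orch.cluster
    configure_master:
      salt.state:
        - tgt: 'master*'
        - sls: cluster.master

    configure_slaves:
      salt.state:
        - tgt: 'slave*'
        - sls: cluster.slaves

    join_cluster:
      salt.state:
        - tgt: 'slave*'
        - sls: cluster.join
        - require:
          - salt: configure_master
          - salt: configure_slaves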

  • Joseph Lorenzini at Feb 27, 2015 at 9:00 pm
    Hi Seth,

      "What is involved in putting the slaves into the 'cluster' state?"

    To be clear, it's not putting the slaves into a cluster state but rather
    taking a set of nodes and clustering them together. This requires the
    following configuration sequence.


        1. select a master node and install cluster software and then start the
        master.
        2. select the first node that is to be slave. Install cluster software
        on the node, add node into cluster, start the master.
        3. Concurrently perform operations on all other nodes in the cluster:
        install cluster software and add the nodes into the cluster.
        4. Concurrently, start up all other nodes.

    So there are state dependencies across minions that need to be accounted
    for. In other words, minions 1,2,3 can only do X after minions 5,6 do Y.

    "I would be interested to hear what you find clunky about that interface
    with the hopes of improving it."

    It's a design and implementation issue from my point of view. This may be
    driven by my specific usage, so I'll elaborate on that.

    I am a QA engineer developing CI infrastructure that automatically
    deploys my company's cluster software (the cluster is the system under
    test) and then runs various automated test suites against it. I am using
    SaltStack as the tool to deploy the cluster software. The catch here is
    that the systems are VMs that are NOT long-lived. They will live for a
    short amount of time, and I cannot rely on hostnames or IP addresses
    being the same.

    Consequently, to actually use the orchestrate runner in this scenario
    would require a repetitive and potentially error-prone Python program to
    generate a new SLS file each time a run is needed (with the hostnames as
    input), since the hostnames for targeting will differ each time.

    From a higher-level design point of view, I don't understand why I can't
    write a state module that describes a state I want a *distributed
    system* to be in. I find that much easier to reason about than writing a
    complex SLS for the orchestrate runner. I want to be able to write a
    single state module that can say tasks X, Y, Z will occur on node 1 once
    tasks A, B, C occur on node 2. I figure Salt's message bus would be the
    perfect communication mechanism for that.

    In the SLS, it would become something like this. I find that much easier
    to understand than building the orchestration logic directly into the
    SLS.

    cluster.join:
             - masternode: hostname1
             - firstslave: hostname2
             - nodes: [node2, node3, node4]


    Thanks,
    Joe

  • Jamie Lawrence at Feb 27, 2015 at 9:15 pm
    Hi Joseph,

    For this piece, we do a lot of state handling with unknown-but-predictable hostnames by enforcing patterns. For instance (oversimplifying a bit), ci-vm-[something]-NN means "I'm a virtual machine doing CI stuff"; the [something] part indicates more detail about the role in the CI process, the NN at the end is a non-predictable number, and our states Do The Right Thing based on that.

    HTH,

    -j


    From: Joseph Lorenzini <jaloren@gmail.com>
    Date: Friday, February 27, 2015 at 1:00 PM
    To: "salt-users@googlegroups.com" <salt-users@googlegroups.com>
    Subject: Re: [salt-users] Orchestration in Custom State Modules

    Consequently, to actually use the orchestrate runner in this scenario would require a repetitive and potentially error-prone Python program to generate a new SLS file each time a run is needed (with the hostnames as input), since the hostnames for targeting will differ each time.

  • Joseph Lorenzini at Feb 28, 2015 at 7:49 pm
    Hi Jamie,

    I am familiar with the approach, but I am not a fan, for a couple of
    reasons: 1) it assumes fine-grained control over DNS/DHCP to enforce the
    hostname pattern; 2) it assumes these DNS entries can be dynamically
    updated/modified during VM launch and cleaned up after the VM is
    destroyed; 3) it assumes the hostname convention is reliable and always
    'fits' the pattern, to avoid breakage.

    In my corporate environment, I don't have that kind of control over DNS,
    and I will never be given that control. All DNS entries have to be
    manually entered by an IT person and can take hours to percolate through
    the network. On top of that, DHCP (for a variety of internal reasons)
    poses a ton of problems. So I already have issues with (1) and (2).

    As for (3), I just think it's asking for trouble to rely on consistent,
    controllable hostnames in a cloud-like environment in which hostnames
    and IP addresses can shift around and disappear at the drop of a hat. I
    think it's much more robust to do the following:

        1. Launch VMs and instantiate the hostname and IP.
        2. Use the IaaS API (e.g. the AWS CLI, OpenStack Nova, etc.) to
        retrieve hostnames and IPs.
        3. Bootstrap Salt on the nodes.
        4. Provide the hostname and, if necessary, the IP address as input
        to either the Salt state or Salt execution module.

    This approach completely sidesteps using a hostname as a UUID for a node
    when you are performing configuration, which is great since there is
    nothing unique about the ID and it's short-lived anyway.

    This is, by the way, why I really dislike SaltStack's default method of
    assigning the hostname as the UUID of the node, though I understand why
    this was done for ease of use. I think OpenStack has the superior
    approach, where there is the concept of a "display name", which can be
    anything, while the actual ID of the node is a true UUID. I wish
    SaltStack did something similar, but that's a whole other topic....

    Joe

  • Seth House at Mar 1, 2015 at 5:09 am
    Hi, Joe. Thanks for the detailed explanation. This is a cool workflow
    that is badly in need of lots more examples and docs.
    On Fri, Feb 27, 2015 at 2:00 PM, Joseph Lorenzini wrote:
    I want to be able to write a single state module
    that can say 'X,Y,Z' tasks will occur on node 1 once tasks A,B,C occur on
    node 2.
    Orchestrate is a (relatively) new addition to Salt that is still
    poorly documented and explored, IMO. Two currently non-obvious things
    about it that suit your use case are: 1) Orchestrate can take CLI
    arguments, 2) Orchestrate can happily execute custom state modules if
    you enjoy diving into Python.

    If you want all the functionality contained within a single sls file,
    you could simply shell out, and it would look like the semi-pseudo-code
    below.

    # /srv/salt/cluster_join.sls
    # Usage:
    # salt-run state.orch cluster_join pillar='{
    # masternode: hostname1,
    # firstslave: hostname2,
    # nodes: node2,node3,node4}'

    setup_masternode:
       salt.function:
         - tgt: {{ pillar.masternode }}
         - name: cmd.run
         - kwarg:
             # Arbitrary shell commands here.
             cmd: |
               command-to-install-cluster-software
               service cluster-software-master start
               any other needed shell commands here

    setup_first_slave:
       salt.function:
         - tgt: {{ pillar.firstslave }}
         - name: cmd.run
         - kwarg:
             cmd: |
               command-to-install-cluster-software
               service cluster-software-master start
               command-to-add-node-into-cluster
         - require:
           - salt: setup_masternode

    setup_remaining_slaves:
       salt.function:
         - tgt: {{ pillar.nodes }}
         - expr_form: list
         - name: cmd.run
         - kwarg:
             cmd: |
               command-to-install-cluster-software
               service cluster-software-master start
               command-to-add-node-into-cluster
         - require:
           - salt: setup_first_slave

    That's one of many, many possible configurations. If you don't
    mind it being spread into multiple files you could just as easily make
    use of `cmd.script` or a custom execution module, or encapsulate the
    cluster setup commands into their own individual sls files. It uses
    Salt's regular State system, so you could use any of the other
    renderers besides YAML. Or you could make use of a custom state module
    if you wanted to write Python. You could even achieve your four-line
    `cluster.join` example _verbatim_ if you wanted to put a small bit
    of work into a custom renderer module. :-)

  • Joseph Lorenzini at Mar 2, 2015 at 1:59 pm
    Hi Seth.

    Ahhhh.....that looks like it addresses my use case perfectly. I am
    definitely going to try this! If it works, then I'll be submitting a PR to
    the documentation so that this is covered. I had no idea that orchestrate
    could handle dependency graphs across nodes like this. Super cool!!

    Thanks,
    Joe
  • Godefroid Chapelle at Mar 2, 2015 at 3:20 pm

    I have been playing with Orchestrate. Instead of using cmd.run, I am
    using custom modules. However, I did not find what I am supposed to do
    in those modules if I want to prevent Orchestrate from moving on to the
    dependent nodes when setup on the master node fails.

    What is the recommended way? A special return code? Raising an exception?

    Thanks

    --
    Godefroid Chapelle

  • Seth House at Mar 4, 2015 at 6:03 pm
    The easiest thing to do is to raise a CommandExecutionError.
    That will display a message to the user and fail the run.

    http://docs.saltstack.com/en/latest/ref/internals/salt.exceptions.html#salt.exceptions.CommandExecutionError
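
    For example, a custom execution module might look something like the
    sketch below (the module name, function name, and shell command are
    placeholders, following the earlier examples in this thread):

    # /srv/salt/_modules/cluster.py
    from salt.exceptions import CommandExecutionError

    def join(master):
        '''
        Join this node to the cluster running on ``master``.
        '''
        result = __salt__['cmd.run_all'](
            'command-to-add-node-into-cluster {0}'.format(master))
        if result['retcode'] != 0:
            # Raising CommandExecutionError fails this job, so any
            # Orchestrate step that requires it will not run.
            raise CommandExecutionError(
                'Cluster join failed: {0}'.format(result['stderr']))
        return result['stdout']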
  • Joseph Lorenzini at Mar 6, 2015 at 12:52 am
    Hi Seth,

    So I have been experimenting with your suggestion. I've run into a
    problem, though. The pillar data from the command line is used in the
    SLS file I pass directly to the orchestrate runner function. If that SLS
    file in turn calls another SLS file, then *that* SLS file does NOT have
    access to the pillar data. Is this by design? Or is there a way to
    expose this data to the other SLS file?

    Thanks,
    Joe
  • Seth House at Mar 8, 2015 at 5:18 am
    If the second sls file is one that is run on minions, you can pass the
    pillar data through via the `pillar` kwarg on `salt.state`:

    setup_first_slave:
       salt.state:
         - tgt: {{ pillar.firstslave }}
         - sls: do_stuff_on_slave
         - pillar: {{ pillar | json() }}
         - require:
           - salt: setup_masternode
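
    For completeness, a sketch of how the called sls might use the forwarded
    values (the sls name comes from the example above; the command and its
    argument are placeholders):

    # /srv/salt/do_stuff_on_slave.sls
    add_node_into_cluster:
      cmd.run:
        - name: command-to-add-node-into-cluster {{ pillar.masternode }}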
  • David Maze at Mar 2, 2015 at 1:05 pm


    I've had reasonable luck in the past using the Salt Mine for this sort of
    operation; but the trick is that, in order to be able to have multiple
    deployments, you need a syndic with a copy of the state tree for each
    deployment.

    If you can get that set up, then in the orchestrate .sls file, you can
    match on a "roles" grain; you can test

    {{ 'master' in grains.get('roles', []) }}

    to see if the current node is the master; and you can get the master's IP
    address with

    {{ salt.mine.get('roles:master', 'network.ip_addrs', 'grain')|first|first }}

    This is totally decoupled from the actual host names or Salt node IDs.
    Something like

    {{ salt.mine.get('roles:master', 'grains.items', 'grain')|first['fqdn'] }}

    would use what each node believes its own DNS name is.

    Doing this means making sure that you are deploying a pillar like

    mine_functions:
       network.ip_addrs: []
       grains.items: []

    Deploying a syndic seems to be something of a black art, but using a map
    file like this works (on the initial deploy, though not on attempts to
    update it):

    t2.small:
       - sub-master:
           make_master: true
           minion:
             master: upstream-master.example.com
             local_master: true
           master:
             syndic_master: upstream-master.example.com
           grains: {roles: [salt-master]}
    m3.xlarge:
       - project-master: {grains: {roles: [master]}}
       - project-slave01: {grains: {roles: [firstslave]}}
       - project-slave02: {grains: {roles: [slave]}}
       - project-slave03: {grains: {roles: [slave]}}

