I have set up a master <--> syndic <--> minion configuration. When I run
commands (e.g. salt '*' test.ping) on the master, I can see them appear in
the minion logs, but the master doesn't seem to get the results back.

I have attached a Vagrantfile and the setup scripts I am using to
install/configure salt (using the salt-bootstrap script). If I don't
specify the git version, then it installs 2014.1.10 and everything works
properly.

Can someone please have a look at my install scripts and see if I am
missing something to make it work in 2014.7.0rc4? I think I have all the
required config properties in the correct place. I have tried this using a
CentOS 6.5 box as well as an Ubuntu 14.04 box - neither of which works
with 2014.7.0rc4.
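
For reference, this is the quick sanity check I have been running on each
box (a rough sketch of my own, nothing official; it assumes the default
/etc/salt/master path on both the top master and the syndic box, and
show_setting is just a throwaway helper):

import yaml  # PyYAML is already a Salt dependency

def show_setting(path, key):
    # Print one setting from a Salt config file, or note that it is missing.
    try:
        with open(path) as fp:
            conf = yaml.safe_load(fp) or {}
    except IOError as exc:
        print('%s: %s' % (path, exc))
        return
    print('%s: %s = %r' % (path, key, conf.get(key, '<not set>')))

# On the top-level master (192.168.50.10): order_masters should be True.
show_setting('/etc/salt/master', 'order_masters')
# On the syndic box (192.168.50.20), which also runs its own salt-master:
# syndic_master should point at the top-level master, i.e. 192.168.50.10.
show_setting('/etc/salt/master', 'syndic_master')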

Thanks,

Mark

Here is my Vagrantfile inline, as the list won't let me attach it:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "chef/centos-6.5"

  config.vm.define "master" do |master|
    master.vm.hostname = "master"
    master.vm.network "private_network", ip: "192.168.50.10"
    master.vm.provision :shell, path: "install_salt_master.sh"
  end

  config.vm.define "syndic" do |syndic|
    syndic.vm.network "private_network", ip: "192.168.50.20"
    syndic.vm.hostname = "syndic"
    syndic.vm.provision :shell, path: "install_salt_syndic.sh"
  end

  config.vm.define "minion" do |minion|
    minion.vm.network "private_network", ip: "192.168.50.30"
    minion.vm.hostname = "minion"
    minion.vm.provision :shell, path: "install_salt_minion.sh"
  end
end

  • Mark Gaylard at Oct 21, 2014 at 2:11 am
    A bit more testing and I have found that it works in 2014.7.0rc1, but not
    in 2014.7.0rc2.

  • Jason Wolfe at Oct 22, 2014 at 8:21 pm
    We are seeing the same. We have one true master, two syndics connected to
    the true master, and a handful of minions connected through the syndics.
    We've tried failover mode, so minions are only connected to one syndic,
    and active/active, with the minions connected to both. In either case,
    the syndics can control all connected minions without issue. When sending
    commands from the true master, the syndics both properly pass on the
    command, which is run on the minions. When the minions return the data
    back to the syndics, the syndics log this error, seeming to claim they
    couldn't write the job cache (which I assume must happen before the
    return data is passed back up the chain to the true master):

    2014-10-22 13:17:42,217 [salt.loaded.int.returner.local_cache ][WARNING ] Could not write job cache file for minions: ['xxx', 'xxx', 'xxx', 'xxx', 'xxx', 'xxx', 'xxx']
    2014-10-22 13:17:42,217 [salt.loaded.int.returner.local_cache ][WARNING ] Could not write job invocation cache file: [Errno 2] No such file or directory: '/var/cache/salt/master/jobs/ec/06e91ef91c801f4f58703a52093454/.load.p'
    2014-10-22 13:17:42,636 [salt.loaded.int.returner.local_cache ][ERROR   ] An inconsistency occurred, a job was received with a job id that is not present in the local cache: 20141022131742212756
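
    For what it's worth, the jobs directory in that warning looks like it is
    derived from a hash of the jid. This is only a guess at the mapping (it
    assumes the default md5 hash_type, and jid_cache_dir is just my own
    throwaway name), but it is how I have been connecting the path in the
    warnings to the jid in the error:

    import hashlib
    import os

    def jid_cache_dir(jid, cache_root='/var/cache/salt/master/jobs'):
        # Guess at the layout: <first two hex chars of hash>/<rest of hash>,
        # matching the shape of the .load.p path in the warning above.
        jhash = hashlib.md5(jid.encode()).hexdigest()
        return os.path.join(cache_root, jhash[:2], jhash[2:])

    print(jid_cache_dir('20141022131742212756'))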
  • Jason Wolfe at Oct 22, 2014 at 8:25 pm
    We are seeing the same on rc4 and rc5.

  • Jason Wolfe at Oct 22, 2014 at 11:12 pm
    So it appears save_load is getting called while prep_jid never is, so when
    save_load goes to write the .load.p file, the jid directory simply doesn't
    exist. We are using the local job cache, and I've tracked it down to here;
    this is what is being run and is causing the initial error:

    https://github.com/saltstack/salt/blob/develop/salt/master.py#L2290

    If you do an os.makedirs(jid_dir) here, it resolves the issue:

    https://github.com/saltstack/salt/blob/develop/salt/returners/local_cache.py#L188
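
    To make it concrete, that workaround amounts to something like this (a
    sketch only, not the actual local_cache.py code or a proposed patch; the
    serialization step is simplified and save_load_with_guard is just an
    illustrative name):

    import os

    def save_load_with_guard(jid_dir, load_bytes):
        # Create the jid directory ourselves if prep_jid never made it,
        # then write the .load.p file the way save_load wants to.
        if not os.path.isdir(jid_dir):
            os.makedirs(jid_dir)  # normally prep_jid's responsibility
        with open(os.path.join(jid_dir, '.load.p'), 'w+b') as fp:
            fp.write(load_bytes)  # real code serializes the load via salt.payload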

    This is obviously not the proper fix; I was just trying to confirm the
    root cause. Is prep_jid supposed to be called somewhere sooner in
    master.py when using the local job cache?

  • Jason Wolfe at Oct 22, 2014 at 11:19 pm
    Created an issue on GitHub; I think we have some solid info for the devs
    to take a look at.

    https://github.com/saltstack/salt/issues/16825
