Just an update at Michael's request - seeing the exact same situation with
EC2.

Setting this environment variable fixes this.
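
The variable in question is ANSIBLE_HOST_KEY_CHECKING (named in the quoted
message below). Set per-run it looks roughly like this, reusing the inventory
and tag from the example further down:

$ ANSIBLE_HOST_KEY_CHECKING=no ansible -i ec2.py tag_Name_test -f 9 -a date
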
On Thursday, 12 September 2013 15:34:33 UTC-7, Michael Blakeley wrote:
On Thursday, September 12, 2013 3:21:23 PM UTC-7, James Cammarata wrote:

I believe the initial iteration through the hosts is single-threaded, as
that occurs before the forks are created; however, can you demonstrate that
your configuration is causing single-threaded behavior after the forks are
running?

Yes, I think so. I observe single-threading for every command throughout
long playbooks. Setting ANSIBLE_HOST_KEY_CHECKING=no resolves that.

Does the output from this single command help?

$ ansible -i ec2.py tag_Name_test -f 9 -a date
ec2-54-200-43-114.us-west-2.compute.amazonaws.com | success | rc=0 >>
Thu Sep 12 21:23:33 UTC 2013
ec2-54-200-40-223.us-west-2.compute.amazonaws.com | success | rc=0 >>
Thu Sep 12 21:23:35 UTC 2013
ec2-54-200-33-219.us-west-2.compute.amazonaws.com | success | rc=0 >>
Thu Sep 12 21:23:36 UTC 2013
ec2-54-200-40-249.us-west-2.compute.amazonaws.com | success | rc=0 >>
Thu Sep 12 21:23:38 UTC 2013
ec2-54-200-43-44.us-west-2.compute.amazonaws.com | success | rc=0 >>
Thu Sep 12 21:23:40 UTC 2013
ec2-54-200-43-42.us-west-2.compute.amazonaws.com | success | rc=0 >>
Thu Sep 12 21:23:42 UTC 2013
ec2-54-200-40-224.us-west-2.compute.amazonaws.com | success | rc=0 >>
Thu Sep 12 21:23:41 UTC 2013
ec2-54-200-42-181.us-west-2.compute.amazonaws.com | success | rc=0 >>
Thu Sep 12 21:23:43 UTC 2013
ec2-54-200-42-164.us-west-2.compute.amazonaws.com | success | rc=0 >>
Thu Sep 12 21:23:44 UTC 2013

With ANSIBLE_HOST_KEY_CHECKING=no, the results return much more quickly
and all nine hosts display the same time (within 1-2 sec anyway).

  • Michael DeHaan at Sep 29, 2014 at 2:39 pm
    Any chance I can get a copy of your known_hosts file?

    Off list would be preferred.

    I'm not sure that's it, but I suspect it could be.


    On Mon, Sep 29, 2014 at 10:35 AM, Vincent Janelle wrote:

    Just an update at Michael's request - seeing the exact same situations,
    with ec2.

    Setting this environment variable fixes this.
    On Thursday, 12 September 2013 15:34:33 UTC-7, Michael Blakeley wrote:
    On Thursday, September 12, 2013 3:21:23 PM UTC-7, James Cammarata wrote:

    I believe the initial iteration through the hosts is single-threaded, as
    that occurs before the forks are created, however can you demonstrate that
    your configuration is causing single-threaded behavior after the forks are
    running?
    Yes, I think so. I observe single-threading for every command throughout
    long playbooks. Setting ANSIBLE_HOST_KEY_CHECKING=no resolves that.

    Does the output from this single command help?

    $ ansible -i ec2.py tag_Name_test -f 9 -a date
    ec2-54-200-43-114.us-west-2.compute.amazonaws.com | success | rc=0 >>
    Thu Sep 12 21:23:33 UTC 2013
    ec2-54-200-40-223.us-west-2.compute.amazonaws.com | success | rc=0 >>
    Thu Sep 12 21:23:35 UTC 2013
    ec2-54-200-33-219.us-west-2.compute.amazonaws.com | success | rc=0 >>
    Thu Sep 12 21:23:36 UTC 2013
    ec2-54-200-40-249.us-west-2.compute.amazonaws.com | success | rc=0 >>
    Thu Sep 12 21:23:38 UTC 2013
    ec2-54-200-43-44.us-west-2.compute.amazonaws.com | success | rc=0 >>
    Thu Sep 12 21:23:40 UTC 2013
    ec2-54-200-43-42.us-west-2.compute.amazonaws.com | success | rc=0 >>
    Thu Sep 12 21:23:42 UTC 2013
    ec2-54-200-40-224.us-west-2.compute.amazonaws.com | success | rc=0 >>
    Thu Sep 12 21:23:41 UTC 2013
    ec2-54-200-42-181.us-west-2.compute.amazonaws.com | success | rc=0 >>
    Thu Sep 12 21:23:43 UTC 2013
    ec2-54-200-42-164.us-west-2.compute.amazonaws.com | success | rc=0 >>
    Thu Sep 12 21:23:44 UTC 2013

    With ANSIBLE_HOST_KEY_CHECKING=no, the results return much more quickly
    and all nine hosts display the same time (within 1-2 sec anyway).
    --
    You received this message because you are subscribed to the Google Groups
    "Ansible Project" group.
    To unsubscribe from this group and stop receiving emails from it, send an
    email to ansible-project+unsubscribe@googlegroups.com.
    To post to this group, send email to ansible-project@googlegroups.com.
    To view this discussion on the web visit
    https://groups.google.com/d/msgid/ansible-project/a828f04b-369e-4b75-acb2-522903aadbe0%40googlegroups.com
    <https://groups.google.com/d/msgid/ansible-project/a828f04b-369e-4b75-acb2-522903aadbe0%40googlegroups.com?utm_medium=email&utm_source=footer>
    .
    For more options, visit https://groups.google.com/d/optout.
    --
    You received this message because you are subscribed to the Google Groups "Ansible Project" group.
    To unsubscribe from this group and stop receiving emails from it, send an email to ansible-project+unsubscribe@googlegroups.com.
    To post to this group, send email to ansible-project@googlegroups.com.
    To view this discussion on the web visit https://groups.google.com/d/msgid/ansible-project/CA%2BnsWgwfye7fvxRN7AiKVWHPXS9D3bxdEfRA-R7kMB5iPpZFmQ%40mail.gmail.com.
    For more options, visit https://groups.google.com/d/optout.
  • Vincent Janelle at Sep 29, 2014 at 2:44 pm
    Not sure how I'd send you a copy of /dev/null, unless ansible is attempting
    to parse the contents of ~/.ssh/known_hosts outside of ssh.
  • James Cammarata at Sep 29, 2014 at 3:05 pm
    Hi Vincent, could you share a sample of the playbook you're running as well
    as the results of running it with -f1, -f2 and -f4? That should determine
    if the playbook is indeed being serialized at some point.

    Do note, however, if you're doing something like this:

    - local_action: ec2 ...
      with_items:
        - ...
        - ...
        - ...

    you will see serialized performance. This is because each pass through a
    with_* loop must complete on all hosts before the next pass begins, and
    with local_action you'd only be executing on a single host (localhost), so
    this would constrain the playbook to serial-like performance.
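
    To make that concrete, a purely hypothetical loop of this shape (the
    module arguments and AMI IDs below are made up for illustration, not taken
    from anyone's real playbook) would process its items strictly one after
    another:

    - name: launch one instance per item (serialized by construction)
      local_action: ec2 image={{ item.ami }} instance_type={{ item.type }} region=us-west-2 wait=yes
      with_items:
        - { ami: "ami-aaaaaaaa", type: "t1.micro" }
        - { ami: "ami-bbbbbbbb", type: "m1.small" }
        - { ami: "ami-cccccccc", type: "m1.medium" }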

    Thanks!
  • Vincent Janelle at Sep 29, 2014 at 3:37 pm
    Exactly like what was described at the start of this thread. :( Setting
    the environment variable produces the desired parallel execution.
  • Michael DeHaan at Sep 29, 2014 at 4:28 pm
    Ansible does read ~/.ssh/known_hosts, because it needs to know whether to
    lock itself down to 1 process to ask you the question about adding a new
    host to known_hosts.

    This only happens when it detects a host isn't already there, because it
    must detect this before SSH asks.

    And this only happens with -c ssh; -c paramiko has its own handling (and
    its own issues - I prefer the SSH implementation if folks have a new
    enough SSH to use ControlPersist).
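
    A rough manual approximation of that pre-flight check, for the curious
    (this is not Ansible's actual code path, just the same question asked with
    ssh-keygen against the default file):

    $ ssh-keygen -F ec2-54-200-43-114.us-west-2.compute.amazonaws.com -f ~/.ssh/known_hosts
    # exit status 0: an entry exists, so no interactive prompt is expected
    # exit status 1: no entry, ssh would prompt, so ansible plays it safe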


  • Michael Blakeley at Sep 29, 2014 at 4:29 pm
    Vincent, I now use a slightly different workaround. Instead of routing
    known_hosts to /dev/null I route it to a temp file. This keeps the EC2
    noise out of my default known_hosts file, and seems to play well with
    ansible.

    From my ~/.ssh/config file:
    Host *.amazonaws.com
          PasswordAuthentication no
          StrictHostKeyChecking no
          UserKnownHostsFile /tmp/ec2_known_hosts
          User ec2-user
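
    If you want to double-check which file ssh will actually use for those
    hosts, a newer OpenSSH (6.8+) can print the effective per-host settings:

    $ ssh -G ec2-54-200-43-114.us-west-2.compute.amazonaws.com | grep -i userknownhostsfile
    # should list /tmp/ec2_known_hosts for hosts matched by the stanza above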


    Hope that helps you.

    -- Mike
  • Michael DeHaan at Sep 29, 2014 at 4:30 pm
    So I'm confused - are you saying you are using a known_hosts file that is
    empty?

    This seems to be a completely unrelated question.

    The mention of /dev/null above seemed to be based on the assumption that
    we don't read it, not on known_hosts actually being symlinked to
    /dev/null.

    Can each of you clarify?
  • Michael Blakeley at Sep 29, 2014 at 4:45 pm
    I took it that Vincent was referring to my message of 2013-09-12
    <https://groups.google.com/d/msg/ansible-project/8p3XWlo83ho/Q1SflaZ9dyAJ>.
    In that post I mentioned using /dev/null for the ssh UserKnownHostsFile
    configuration key, scoped to Host *.amazonaws.com.

    This configuration triggers single-threaded behavior from ansible because
    ssh never stores any record of connecting to the EC2 hosts - not the first
    time, not ever - because known_hosts is /dev/null.

    -- Mike
  • Michael DeHaan at Sep 29, 2014 at 4:54 pm
    Ansible does not find your known_hosts location from ~/.ssh/config on a
    per-host basis, and it does read your ~/.ssh/known_hosts.

    It does this because it needs to know, in advance of SSH asking, whether
    it needs to lock.

    Assume it's running at 50/200 forks and needs to ask a question
    interactively - that's why it needs to know.

    So if you are saying you keep known_hosts in a different file, that may be
    EXACTLY the problem. With host key checking on, and the data going
    elsewhere, it can't be found, and ansible is locking pre-emptively.

  • Michael DeHaan at Sep 29, 2014 at 4:57 pm
    I'm wondering if we can detect configuration of alternative known_hosts
    locations in ~/.ssh/config and issue a warning, which should key people in
    to turn off the checking feature.

    This should close this out, I'd think.
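
    In the meantime, a crude manual check for that situation - just a grep
    over the client config, nothing Ansible does itself:

    $ grep -iE '^[[:space:]]*UserKnownHostsFile' ~/.ssh/config

    Any hit means ssh may be keeping host keys somewhere Ansible's pre-check
    won't look.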


  • Matt Jaynes at Nov 10, 2014 at 5:52 pm
  • Lorrin Nelson at Jun 18, 2015 at 1:39 am
    Any updates on this? I took a gander through the GitHub issues but didn't
    see one that seemed related.

    On Monday, November 10, 2014 at 9:52:43 AM UTC-8, Matt Jaynes wrote:

    Sounds like some great possible solutions.

    Either

    1) Reading the SSH config to pick up the correct known_hosts locations
    (and perhaps setting 'host_key_checking' to false if the location is
    '/dev/null' since that's a common pattern - for instance, Vagrant does this
    by default, see https://docs.vagrantup.com/v2/cli/ssh_config.html )

    or

    2) A simple warning message when serialization is triggered due to
    known_hosts in order to save folks from some really tough debugging

    Just lost a few hours debugging this issue. For several environments, I
    have a client's known_hosts set to custom locations in their SSH config,
    so everything was running serially (a 3-minute process * 20 servers = 60
    minutes!). Persistence and sweat finally led me to try
    "host_key_checking = False" and it finally ran in parallel - so nice to
    finally see, since I'd tried just about everything else I could imagine
    (forks, serial, ssh options, restructuring inventory, removing inventory
    groups, etc).
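
    For anyone landing here later, that setting lives in the [defaults]
    section of ansible.cfg (or as the ANSIBLE_HOST_KEY_CHECKING environment
    variable mentioned earlier in the thread); a minimal sketch:

    [defaults]
    host_key_checking = False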

    Thanks,
    Matt
  • Steve Ims at Oct 3, 2015 at 1:11 pm
    We got burned by this too.

    We use Ansible from a single Jenkins server to manage instances in multiple
    EC2 VPCs. We use strict host checking for security and we have a custom
    known_hosts file per VPC (we've automated updates to known_hosts on each
    deploy).

    "Reading the SSH config to pick up the correct known_hosts locations"
    (option #1 posted by Matt) seems the most intuitive solution.

    Guess we are generally spoiled by Ansible :-) Ansible fits so well into
    our workflows that we assumed it would also honor our ssh configuration.
    And in fact Ansible mostly does honor our ssh configuration, because our
    playbooks and ad-hoc runs do use the custom known_hosts -- but the silent
    impact on performance (serial, never parallel) was unexpected.

    Appreciate your work!

    -- Steve

  • Michael DeHaan at Sep 29, 2014 at 4:29 pm
    Hi James,

    Each loop DOES happen within the host loop.

    If you have 50 hosts and they are running a with_items loop, that still
    happens 50 hosts at a time.
