that sometimes does not transfer fully. The upshot is that when Puppet goes to
install the file (in this case a .deb package), dpkg complains with:
Debug: Executing '/usr/bin/dpkg --force-confold -i
Error: Execution of '/usr/bin/dpkg --force-confold -i
/usr/src/jre_1.7.0_amd64.deb' returned 1: (Reading database ... 37263 files
and directories currently installed.)
Unpacking jre (from /usr/src/jre_1.7.0_amd64.deb) ...
dpkg-deb (subprocess): short read on buffer copy for failed to write to pipe in copy
dpkg-deb: error: subprocess paste returned error exit status 2
dpkg: error processing /usr/src/jre_1.7.0_amd64.deb (--install):
short read on buffer copy for backend dpkg-deb during
Errors were encountered while processing:
Like I mentioned, the file should be 45 MB; however, I've seen it end up at
varying sizes, anywhere from 3 MB to 35 MB. This is quite baffling.
Shouldn't there be some kind of checksum to ensure that the size and contents
of the file are intact? I am running version 3.1.0 on both the puppet master
and the puppet agent(s).
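For context, the pattern in play is roughly the following (module name and
source path are simplified placeholders, not our real layout); the file
resource is the piece that should be md5-checksumming the content on each run,
and the package install only fires once the staged copy is in place:

file { '/usr/src/jre_1.7.0_amd64.deb':
  ensure   => file,
  # Placeholder source path -- the file server is what seems to be
  # delivering truncated content.
  source   => 'puppet:///modules/jre/jre_1.7.0_amd64.deb',
  # md5 is the default in 3.x; stated here for clarity.
  checksum => 'md5',
}

package { 'jre':
  ensure   => installed,
  provider => 'dpkg',
  source   => '/usr/src/jre_1.7.0_amd64.deb',
  require  => File['/usr/src/jre_1.7.0_amd64.deb'],
}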
I've looked into running Puppet behind Apache with Passenger, or even just
serving the files straight from Apache or nginx. Is this a known issue with
Puppet's built-in web server? This is affecting our deploys, and I am looking
for a solution ASAP.
You received this message because you are subscribed to the Google Groups "Puppet Users" group.