I went through the same exercise and here is my version (relying on
On Friday, April 27, 2012 6:32:27 PM UTC-7, Kyle Lemons wrote:

Ah, yes, my mistake. I would create an io.Pipe and give the read end to
the request's Body and use the write end on the io.Copy.

On Fri, Apr 27, 2012 at 6:25 PM, James Lyons <james...@gmail.com> wrote:
If I were writing a server, then perhaps. However, I'm attempting a
multipart-form POST upload; I'm not trying to handle the reception
of such a request.
There is no "requestwriter" that I have found.

On Fri, Apr 27, 2012 at 6:04 PM, Kyle Lemons <kev...@google.com> wrote:
Perhaps I'm missing something, but it seems like:

file, err := os.Open("filename.txt")
if err != nil {
    // handle error
}
defer file.Close()
parts := multipart.NewWriter(rw) // rw is the http.ResponseWriter
defer parts.Close()
filePart, err := parts.CreateFormFile("upfile", "filename.txt")
io.Copy(filePart, file)

would be the behavior you want.

On Fri, Apr 27, 2012 at 3:59 PM, James Lyons <james...@gmail.com> wrote:
I don't understand how that is helpful in this context. It depends, of
course, on how you set things up. io.Copy will write to a bytes.Buffer
using ReadFrom (or WriteTo), which happens immediately. So if you're
trying to set up a reader to use as your body input to http.Post(),
one might choose to use a buffer. The problem with that is that it
reads all the data from the file into the buffer, because the
implementation calls Write (or ReadFrom/WriteTo, depending -- but a
buffer implements those as immediate reads/writes). I suspect that
initial choice of a basic writer implementation may be the problem;
any other suggestions here? If not, I'm basing this on the behavior
of bytes.Buffer.

So I don't see how io.Copy avoids buffering anything at all in this
context. But I'm curious if I'm missing something.

Then there is the use of multipart. I used CreateFormFile -- which is
just a convenience wrapper for CreatePart().
This returns a writer, which you can use to write the part -- but not
in a deferred way. If you call Write on it, it will pass that through
to the underlying writer implementation (in this case a bytes.Buffer)
and actually write the data (file contents) to that buffer.


So, what you'd like is for there to be an "append" method on the
writer returned by CreatePart() -- one that would let you add a reader
(a file) without reading its contents into the underlying buffer.
Close would then do the same thing, closing the part by "appending"
the closing boundary string after the reader that formed the part.

This can all be accomplished with a multireader, and then not until
the calls to io.Copy that happen deeper in the call stack for
http.Post() will the file contents actually be read. Which is the
desired behavior.

On Fri, Apr 27, 2012 at 2:30 PM, Kyle Lemons <kev...@google.com> wrote:
On Fri, Apr 27, 2012 at 1:31 PM, James Lyons <james...@gmail.com> wrote:
@Kyle -- Well, I wanted to use the multipart writer, but I didn't find
an interface that would let me pass a file to it. The multipart
writer formats the Part, then lets you write data to the Part
(using the returned writer), and finally lets you close the part
by closing the writer. Which is great when what you have is a
smallish amount of data to post. But when it's a file (potentially
large), you want to use something like a multi-reader to defer the
reading of things until the http library wants to write to the bufio
that it wraps the socket with.

io.Copy will write only as fast as it's being read by the client, and
a deferred close cleans up nicely.
Which it doesn't do for you. So I had to replicate much of what it
would do, to then let me use multireaders to accomplish what I want
(logical concatenation without reading a file). It would be nice if
there were a version of the multipart writer that worked more like
this, because often what you want to do is stick a whole file in the
Part, and there isn't any reason to read it all up front.

@Paddy -- you'll notice in the first email I sent I did something
similar to your current approach. Which for small files is fine, but
if you have a big one, it's really annoying. Plus there is a
performance hit (of a sort) in that you have to copy the data once
into a buffer from disk, and then from the buffer to the socket buffer
at the end. The extra copy is all in memory, however, so I'm not sure
it would show up except under the heaviest of loads. I care more about
stability, though, and potentially running out of memory on a large
file just feels bad. So I like this better -- it feels like it should
be nominally more performant, and quite a bit more stable.

On Fri, Apr 27, 2012 at 1:15 PM, Kyle Lemons <kev...@google.com> wrote:
What happened to the multipart writer? When I looked at the code, it
didn't appear to buffer anything, and it does the MIME encoding for you.

On Fri, Apr 27, 2012 at 1:06 PM, James Lyons <james...@gmail.com> wrote:
So... For those golang experts out there, I'm sure this was
obvious, but in case there is anyone else out there struggling while
learning what's available in the standard library: io.MultiReader is
*very* useful in this context.

The following is a very simple POST of file data to a web service
that expects files to arrive as multipart form uploads, and it does so
without reading the file data unnecessarily.

package main

import (
    "bytes"
    "fmt"
    "io"
    "mime/multipart"
    "net/http"
    "os"
)

func postFile(filename string, target_url string) (*http.Response, error) {
    body_buf := bytes.NewBufferString("")
    body_writer := multipart.NewWriter(body_buf)

    // use the body_writer to write the Part headers to the buffer
    _, err := body_writer.CreateFormFile("upfile", filename)
    if err != nil {
        fmt.Println("error writing to buffer")
        return nil, err
    }

    // the file data will be the second part of the body
    fh, err := os.Open(filename)
    if err != nil {
        fmt.Println("error opening file")
        return nil, err
    }

    // need to know the boundary to properly close the part myself.
    boundary := body_writer.Boundary()
    close_buf := bytes.NewBufferString(fmt.Sprintf("\r\n--%s--\r\n", boundary))

    // use multi-reader to defer the reading of the file data until
    // writing to the socket buffer.
    request_reader := io.MultiReader(body_buf, fh, close_buf)
    fi, err := fh.Stat()
    if err != nil {
        fmt.Printf("Error Stating file: %s", filename)
        return nil, err
    }
    req, err := http.NewRequest("POST", target_url, request_reader)
    if err != nil {
        return nil, err
    }

    // Set headers for multipart, and Content Length
    req.Header.Add("Content-Type", "multipart/form-data; boundary="+boundary)
    req.ContentLength = int64(body_buf.Len()) + fi.Size() + int64(close_buf.Len())

    return http.DefaultClient.Do(req)
}

// sample usage
func main() {
    target_url := "http://localhost:8888/"
    filename := "/path/to/file.rtf"
    postFile(filename, target_url)
}

On Thu, Apr 26, 2012 at 9:29 AM, James Lyons <james...@gmail.com> wrote:
So I spent some time trying to code up a little upload function to
send files up to a server. The server expects a form upload, as you
would get from using this HTML in the browser:
<form method='POST' enctype='multipart/form-data'>
File to upload: <input type=file name=upfile><br>
<input type=submit value=Press> to upload the file!
</form>

I couldn't find a pre-packaged way to do this, so I came up with the
following simple approach:

package main

import (
    "bytes"
    "fmt"
    "io"
    "mime/multipart"
    "net/http"
    "os"
)

func main() {
    target_url := "http://localhost:8888/"
    body_buf := bytes.NewBufferString("")
    body_writer := multipart.NewWriter(body_buf)
    filename := "/path/to/file.rtf"
    file_writer, err := body_writer.CreateFormFile("upfile", filename)
    if err != nil {
        fmt.Println("error writing to buffer")
        return
    }
    fh, err := os.Open(filename)
    if err != nil {
        fmt.Println("error opening file")
        return
    }
    io.Copy(file_writer, fh)
    body_writer.Close() // writes the trailing boundary
    http.Post(target_url, "bad/mime", body_buf)
}

Aside from not doing mimetype setting -- this "works". My fear,
however, is that for large files io.Copy is going to write the entire
contents of the file into the buffer before I can send it with
http.Post. For small files this isn't a concern. But for large ones
(4GB+, let's say) that's a bunch of memory. I'm wondering if there
is a way to have a buffer type that has a notion of "maximum size
before I start using disk", so that for large files it would store the
data on disk and not chew memory. I feel like there should be some way
(that I'm missing) to do this using a bufio or something, but I keep
coming up with nothing. It's unfortunate that I have to copy everything
to the buffer for the request, and then send the whole request over the
wire. Perhaps I should create a reader/writer where you can append
a reader to a set of readers -- and when one returns EOF it starts
reading the next... So you could append a file to a buffer without
reading the contents -- until you're actually "reading" the whole
thing to copy to the socket.

Anyone have experience with this?

You received this message because you are subscribed to the Google Groups "golang-nuts" group.

golang-nuts · posted Jul 3, '13 at 12:23a · 1 user in discussion: Matt Aimonetti (1 post)