Hi,

I ran the PHP code below with both Mongo and MongoClient to insert 100k records.
Mongo took 70-75 seconds to complete, while MongoClient took 440-450 seconds.

$start = microtime(true);

$objBaseMongoRecord = new BaseMongoRecord();

$objBaseMongoRecord->setCollectionName("mongotest");

$i = 0;
while ($i < 1000000) {

     $data = array(
         "name" => array(
             "firstname" => "raju" . $i,
             "lastname" => "bishnoi" . $i),
         "address" => array(
             "street" => "raju" . $i,
             "city" => "city" . $i,
             "state" => "state" . $i,
             "country" => "country" . $i,
             "zipcode" => "1" . $i),
         "officephone" => "25412541",
         "homephone" => "625412541",
         "status" => "A",
         "date" => date('Y-m-d:H-i-s'),
         "time" => time());

     $objBaseMongoRecord->ensureIndex(array("time" => 1));
     $objBaseMongoRecord->insert($data);
     $i++;
}

$duration = microtime(true) - $start;
// print in the format 1.2345 for better reading
printf("took %.4f seconds", $duration);

Can anyone tell me why MongoClient takes more time, and how to make it fast?

Thanks
Raju

--
You received this message because you are subscribed to the Google
Groups "mongodb-user" group.
To post to this group, send email to mongodb-user@googlegroups.com
To unsubscribe from this group, send email to
mongodb-user+unsubscribe@googlegroups.com
See also the IRC channel -- freenode.net#mongodb


  • Sam Millman at Jan 10, 2013 at 9:06 am
    Mongo() by default does a fire-and-forget write, while MongoClient() does a
    safe (acknowledged) write.

    You can look here for an explanation of the differences:
    http://php.net/manual/en/mongo.writeconcerns.php

    Basically safe writes by default were introduced with MongoClient.
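    The write concern can also be set per operation if you want MongoClient's
    defaults but fire-and-forget speed for a specific call. A minimal sketch,
    assuming the 1.3+ legacy PHP driver, where insert() accepts a "w" option;
    the driver calls are commented out so this runs without a mongod:

```php
<?php
// Sketch: per-operation write concern with the legacy PHP driver (1.3+).
// Driver calls are commented out so this runs without a mongod;
// $collection is assumed to be a MongoCollection.
$doc = array("name" => "raju", "status" => "A");

// Unacknowledged, like the old Mongo() default:
// $collection->insert($doc, array("w" => 0));

// Acknowledged, the MongoClient default:
// $collection->insert($doc, array("w" => 1));
$options = array("w" => 0); // options the fire-and-forget call would pass
```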
  • Derick Rethans at Jan 10, 2013 at 10:20 am

    On Thu, 10 Jan 2013, Sam Millman wrote:

    Mongo() by default does a fire and forget while MongoClient() does a safe
    write.

    You can look here to an explanation of the differences:
    http://php.net/manual/en/mongo.writeconcerns.php

    Basically safe writes by default were introduced with MongoClient.
    Yes, but that should not account for *that* much of a change. I'll also
    comment on the original post.

    cheers,
    Derick

    --
    {
       website: [ "http://mongodb.org", "http://derickrethans.nl" ],
       twitter: [ "@derickr", "@mongodb" ]
    }

  • Derick Rethans at Jan 10, 2013 at 10:24 am

    On Wed, 9 Jan 2013, Raju Bishnoi wrote:

    I ran the PHP code below with both Mongo and MongoClient to insert 100k
    records. Mongo took 70-75 seconds to complete, while MongoClient took
    440-450 seconds.

    $start = microtime(true);

    $objBaseMongoRecord = new BaseMongoRecord();

    $objBaseMongoRecord->setCollectionName("mongotest");
    I'm not quite sure what setCollectionName and $objBaseMongoRecord are.
    They're not part of the driver.
    $i = 0;
    while ($i < 1000000) {

         $data = array(
             "name" => array(
                 "firstname" => "raju" . $i,
                 "lastname" => "bishnoi" . $i),
             "address" => array(
                 "street" => "raju" . $i,
                 "city" => "city" . $i,
                 "state" => "state" . $i,
                 "country" => "country" . $i,
                 "zipcode" => "1" . $i),
             "officephone" => "25412541",
             "homephone" => "625412541",
             "status" => "A",
             "date" => date('Y-m-d:H-i-s'),
             "time" => time());

    $objBaseMongoRecord->ensureIndex(array("time" => 1));
    Why are you calling ensureIndex in each iteration of the loop? That,
    combined with the acknowledged writes that come with MongoClient, will
    definitely slow it down.
         $objBaseMongoRecord->insert($data);
         $i++;
    }

    $duration = microtime(true) - $start;
    // print in the format 1.2345 for better reading
    printf("took %.4f seconds", $duration);

    Can anyone tell me why MongoClient takes more time, and how to make it fast?
    1. Don't call ensureIndex() in every iteration
    2. If that *doesn't* help a lot, please read
        http://drck.me/mongoclient-9mq and perhaps change your MongoClient
        initialisation to resemble something like:

        $m = new MongoClient("mongodb://localhost/?w=0");
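    Restructured with the index created once up front, the loop might look
    like the sketch below. This assumes the legacy PHP Mongo driver; the
    driver calls are left as comments so only the document-building logic
    runs stand-alone:

```php
<?php
// Sketch: create the index once, then insert inside the loop.
// Assumes the legacy PHP Mongo driver; driver calls are commented out
// so the document-building logic can run stand-alone.

// $m = new MongoClient("mongodb://localhost/?w=0"); // unacknowledged writes
// $collection->ensureIndex(array("time" => 1));     // once, before the loop

function buildDoc($i) {
    // Same shape as the original script's $data array (abbreviated).
    return array(
        "name" => array(
            "firstname" => "raju" . $i,
            "lastname" => "bishnoi" . $i),
        "status" => "A",
        "time" => time());
}

$docs = array();
for ($i = 0; $i < 3; $i++) {
    $docs[] = buildDoc($i);
    // $collection->insert($docs[$i]); // one write per document
}
```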

    cheers,
    Derick

    --
    {
       website: [ "http://mongodb.org", "http://derickrethans.nl" ],
       twitter: [ "@derickr", "@mongodb" ]
    }

  • Sam Millman at Jan 10, 2013 at 11:14 am
    True, acknowledged writes alone shouldn't cause 6x slowness; I wrote my
    response a little too early in the morning there.

  • Raju Bishnoi at Jan 10, 2013 at 11:31 am
    Hi Derick & Sammaye,

    Thanks for your response.
    I have two more questions.

    1) As we know, MongoDB does not support joins, so which method should I
    use for a DB reference? There are two kinds of references. If I go with
    DBRef, it will add a $ref field to every row, like below:

    {
         $id: {$oid : "4f4603820e25f4c515000001"},
         title: "User group",
         users: [
             {_id: {$ref: "users", $id: { $oid: "4f44af6a024342300e000002"}} }
         ]
    }

    Instead of DBRef, can I use a simple reference as below?

    {
         $id: {$oid : "4f4603820e25f4c515000001"},
         title: "User group",
         users: { $oid: "4f44af6a024342300e000002"}
    }

    2) Is there any data limit for a collection? I am running the above code
    on my localhost, and it successfully inserted 3,436,655 records. If I run
    the script again, it does not insert any more records.

    Thanks
    Raju


  • Derick Rethans at Jan 10, 2013 at 11:39 am

    On Thu, 10 Jan 2013, Raju Bishnoi wrote:

    Hi Derick & Sammaye,

    Thanks for your response.
    I have two more question.

    1 ) As we know mongodb does not support joins. so what method i have to use
    for db reference... there are two refrences.
    If i go with Dbref then it will add $ref field in every row like below

    {
         $id: {$oid : "4f4603820e25f4c515000001"},
         title: "User group",
         users: [
             {_id: {$ref: "users", $id: { $oid: "4f44af6a024342300e000002"}} }
         ]
    }

    Instead of DBRef, can I use a simple reference as below?

    {
         $id: {$oid : "4f4603820e25f4c515000001"},
         title: "User group",
         users: { $oid: "4f44af6a024342300e000002"}
    }
    Both work. Personally, I've never used DBRefs, and as you are doing the
    querying yourself anyway, there is no real need for them.
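    A manual reference of that second kind is resolved with a follow-up
    query. A minimal sketch (plain string ids are used for illustration;
    real documents would hold MongoId objects, and $users is assumed to be
    a MongoCollection):

```php
<?php
// Sketch: manual reference instead of DBRef. The group stores only the
// referenced _id; resolving it takes a second query, since MongoDB has
// no joins. String ids are used here for illustration.
$group = array(
    "_id"   => "4f4603820e25f4c515000001",
    "title" => "User group",
    "users" => array("4f44af6a024342300e000002"),
);

$refId = $group["users"][0];
// Hypothetical resolution, assuming $users is a MongoCollection:
// $user = $users->findOne(array("_id" => new MongoId($refId)));
```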
    2. Is there any data limit for collection, because i am running the
    above code in my localhost which get inserted sucessfully 3436655
    records.

    If i run the script again then it's not inserting the records.
    There should be no limits (besides disk space). If you use MongoClient
    with acknowledged writes, the driver will tell you why it couldn't
    insert. It might just be that you're reusing the same _id?

    cheers,
    Derick

    --
    {
       website: [ "http://mongodb.org", "http://derickrethans.nl" ],
       twitter: [ "@derickr", "@mongodb" ]
    }

  • Raju Bishnoi at Jan 10, 2013 at 12:11 pm
    I am getting the error below:

    Fatal error: Uncaught exception 'MongoCursorException' with message
    'localhost:27017: can't map file memory - mongo requires 64 bit build for
    larger datasets
  • Derick Rethans at Jan 10, 2013 at 12:35 pm

    On Thu, 10 Jan 2013, Raju Bishnoi wrote:

    Getting below error

    Fatal error: Uncaught exception 'MongoCursorException' with message
    'localhost:27017: can't map file memory - mongo requires 64 bit build for
    larger datasets
    There you go. You have run out of the 2GB memory space that 32-bit
    builds of MongoDB are limited to. You will need to download and start a
    64-bit version, which you can find here: http://www.mongodb.org/downloads -
    however, to run a 64-bit build, you need a 64-bit OS as well.

    cheers,
    Derick

    --
    {
       website: [ "http://mongodb.org", "http://derickrethans.nl" ],
       twitter: [ "@derickr", "@mongodb" ]
    }

  • Raju Bishnoi at Jan 11, 2013 at 11:59 am
    Okay

    On the production server (64-bit), I am getting an error during a batch insert:

    Fatal error: Uncaught exception 'MongoException' with message 'insert too
    large: 31322271, max: 16000000'

    ~Raju
  • Derick Rethans at Jan 11, 2013 at 12:20 pm

    On Fri, 11 Jan 2013, Raju Bishnoi wrote:

    Okay

    On production server(64bit) getting error in batch insert.

    Fatal error: Uncaught exception 'MongoException' with message 'insert too
    large: 31322271, max: 16000000'
    The maximum document size is 16 MB, and you're trying to send 31 MB. You
    will have to split it up into smaller batches.
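    Splitting could look like the sketch below. The chunk size of 1000
    documents is an arbitrary assumption (pick a value that keeps each batch
    safely under the limit), and the driver call is commented out so the
    chunking logic runs stand-alone:

```php
<?php
// Sketch: split a large batch into smaller chunks before inserting.
// The chunk size of 1000 is an assumption; tune it so each batch stays
// under the server's insert size limit.
$docs = array();
for ($i = 0; $i < 2500; $i++) {
    $docs[] = array("n" => $i, "status" => "A");
}

$chunks = array_chunk($docs, 1000);
foreach ($chunks as $chunk) {
    // $collection->batchInsert($chunk); // one call per chunk
}
```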

    cheers,
    Derick

    --
    {
       website: [ "http://mongodb.org", "http://derickrethans.nl" ],
       twitter: [ "@derickr", "@mongodb" ]
    }

  • Mr Drbeaver at Oct 17, 2014 at 11:51 pm
    How does splitting it up into smaller batches fix an error about the
    document size? If any of his documents were too large, wouldn't he get
    the same error whether he used batchInsert in smaller chunks or a plain
    old insert? I ask because I am getting this same error, but when I split
    my batch into smaller chunks it works. I've always been under the
    impression that there is a separate batch size limit, and a different
    MongoException when that limit is exceeded.
  • Asya Kamsky at Oct 20, 2014 at 2:44 am
    What's the error that you are getting?

    Also, what version are you using? The message you quoted is from
    January 2013 (almost two years ago) so it's likely a very different
    environment.

    Asya

  • Mr Drbeaver at Oct 20, 2014 at 5:22 pm
    I am getting the same error, 'MongoException' with message 'insert too
    large: 30678524, max: 16000000', using version 2.4.4. I just don't
    understand how the batch size affects an error about the document size.
  • Asya Kamsky at Oct 22, 2014 at 7:54 pm
    The driver and the server communicate using BSON documents on the wire.
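    One way to see the effect: estimate the serialized size of the whole
    batch before sending it. A rough sketch only; strlen(json_encode(...))
    merely approximates BSON size, and the 16000000-byte figure is taken
    from the error message quoted above:

```php
<?php
// Rough sketch: approximate the on-the-wire size of a batch and check
// it against the limit quoted in the error message (16000000 bytes).
// json_encode() length is only a stand-in for real BSON size.
$limit = 16000000;
$docs = array();
for ($i = 0; $i < 1000; $i++) {
    $docs[] = array("name" => "raju" . $i, "payload" => str_repeat("x", 100));
}

$batchBytes = 0;
foreach ($docs as $doc) {
    $batchBytes += strlen(json_encode($doc));
}
$fits = ($batchBytes < $limit); // split the batch when this is false
```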

    Asya

  • Ashish Shetty at Oct 21, 2014 at 6:01 am
    when you need to insert a large amount of documents it is better to do a
    bulk insert ....look into it create a array of your documents and insert
    them into mongo.
    ie. use array_push(array,)
    $full_data = array();
    $i = 0;
    while ($i < 1000000) {

         $data = array(
             "name" => array(
                 "firstname" => "raju" . $i,
                 "lastname" => "bishnoi" . $i),
             "address" => array(
                 "street" => "raju" . $i,
                 "city" => "city" . $i,
                 "state" => "state" . $i,
                 "country" => "country" . $i,
                 "zipcode" => "1" . $i),
             "officephone" => "25412541",
             "homephone" => "625412541",
             "status" => "A",
             "date" => date('Y-m-d:H-i-s'),
             "time" => time());
         array_push($full_data, $data);

         $i++;
    }

    $objBaseMongoRecord->insert($full_data);
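    One caution on the single-big-array approach: the whole batch travels in one wire message, so for hundreds of thousands of documents it is safer to flush in fixed-size chunks. The following is a minimal sketch, assuming the legacy "mongo" PHP extension; the MongoClient/batchInsert wiring is commented out and the collection name is taken from the thread's example.

```php
<?php
// Sketch: build documents in memory, split them with array_chunk(),
// and send each chunk in one round trip instead of one insert() per
// document.

$docs = array();
for ($i = 0; $i < 100000; $i++) {
    $docs[] = array(
        "name" => array("firstname" => "raju" . $i, "lastname" => "bishnoi" . $i),
        "time" => time(),
    );
}

$batches = array_chunk($docs, 1000);  // 1000 documents per batch

// Hypothetical wiring, following the thread's collection name:
// $collection = (new MongoClient())->selectDB("test")->mongotest;
foreach ($batches as $batch) {
    // $collection->batchInsert($batch);  // one network round trip per chunk
}

echo count($batches), "\n";  // 100
```

    Batch size is a trade-off: bigger chunks mean fewer round trips but larger messages; a few hundred to a thousand documents per batch is a common starting point.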
    On Thursday, January 10, 2013 9:43:29 AM UTC+5:30, Raju Bishnoi wrote:

    Hi,

    I run the below php code with Mongo & MongoClient to insert 100k records.
    Mongo took 70-75 second to complete and MongoClient took 440-450 second.

    $start = microtime(true);

    $objBaseMongoRecord = new BaseMongoRecord();

    $objBaseMongoRecord->setCollectionName("mongotest");

    $i = 0;
    while ($i < 1000000) {

        $data = array(
            "name" => array(
                "firstname" => "raju" . $i,
                "lastname" => "bishnoi" . $i),
            "address" => array(
                "street" => "raju" . $i,
                "city" => "city" . $i,
                "state" => "state" . $i,
                "country" => "country" . $i,
                "zipcode" => "1" . $i),
            "officephone" => "25412541",
            "homephone" => "625412541",
            "status" => "A",
            "date" => date('Y-m-d:H-i-s'),
            "time" => time());

        $objBaseMongoRecord->ensureIndex(array("time" => 1));
        $objBaseMongoRecord->insert($data);
        $i++;
    }

    $duration = microtime(true) - $start;
    // print in the format 1.2345 for better reading
    printf("took %.4f seconds", $duration);

    Can anyone tell me why MongoClient takes more time, and how can I make it faster?

    Thanks
    Raju
  • Mr Drbeaver at Oct 21, 2014 at 5:28 pm
    I am using the bulkInsert method; the error I posted was thrown while
    doing a bulk insert. However, when I reduce the number of documents I want
    to insert, I don't get the "'MongoException' with message 'insert too large:
    30678524, max: 16000000'". Why does reducing the number of documents I
    bulk insert get rid of this error? It seems that if any single document
    were too big, I would get the error regardless of how many other documents
    I insert. Why does changing the amount I bulk insert affect the document
    size limit?
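    Reading the numbers in the error, 30678524 is plausibly the size of the whole batch payload rather than of any one document, which would explain why inserting fewer documents makes it go away. Under that assumption, splitting by estimated size rather than by document count is the robust fix. A sketch follows; serialize() is only a rough stand-in for BSON size, and the commented-out batchInsert() call is assumed from the legacy driver.

```php
<?php
// Sketch: accumulate documents until the next one would push the batch
// past a byte cap, flush, and start a new batch. Every flushed batch
// stays under the cap.

function chunk_by_size(array $docs, $maxBytes) {
    $batches = array();
    $current = array();
    $bytes = 0;
    foreach ($docs as $doc) {
        $size = strlen(serialize($doc));  // rough per-document estimate
        if ($current && $bytes + $size > $maxBytes) {
            $batches[] = $current;        // flush before exceeding the cap
            $current = array();
            $bytes = 0;
        }
        $current[] = $doc;
        $bytes += $size;
    }
    if ($current) {
        $batches[] = $current;            // flush the final partial batch
    }
    return $batches;
}

$docs = array_fill(0, 100, array("payload" => str_repeat("x", 1000)));
$batches = chunk_by_size($docs, 16000);   // pretend cap: 16 KB instead of 16 MB
foreach ($batches as $batch) {
    // $collection->batchInsert($batch);  // each call stays under the cap
}
echo count($batches), "\n";
```

    With a real 16 MB cap, the batch of documents that previously totalled ~30 MB would simply be sent as two or more calls.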
    On Monday, October 20, 2014 11:01:34 PM UTC-7, Ashish Shetty wrote:


  • Asya Kamsky at Oct 22, 2014 at 7:46 pm
    What bulkInsert method? What version of MongoDB? What driver? What
    version of the driver?

    What exactly does your code look like?

    Asya

    On Tue, Oct 21, 2014 at 1:28 PM, wrote:

Discussion Overview
group: mongodb-user
categories: mongodb
posted: Jan 10, '13 at 4:13a
active: Oct 22, '14 at 7:54p
posts: 18
users: 6
website: mongodb.org
irc: #mongodb
