By default, LOB store in line or out of line?
Hi Lists
According to the document below:
*Oracle® Database Application Developer's Guide - Large Objects
10g Release 2 (10.2)*
Part Number B14249-01

In Chapter 4 there are two paragraphs:
1. LOB values are stored out-of-line when any of the following situations
apply:

*By default. That is, if you do not specify a LOB parameter for the LOB
storage clause when you create the table.*

2. ENABLE or DISABLE STORAGE IN ROW Clause

You use the ENABLE | DISABLE STORAGE IN ROW clause to indicate whether the
LOB should be stored inline (in the row) or out-of-line.

Note:
You may not alter this specification once you have made it: if you ENABLE
STORAGE IN ROW, then you cannot alter it to DISABLE STORAGE IN ROW and vice
versa.
*The default is ENABLE STORAGE IN ROW.*

Is this a documentation error? Which one is the truth, or am I misunderstanding something?

Based on our test, even though every LOB is only 2080 bytes (less than 4000
bytes), the LOB is stored out of line, yet dba_lobs.in_row=YES.

--
Kamus <kamusis@gmail.com>

Visit my blog for more : http://www.dbform.com
Join ACOUG: http://www.acoug.org

--
http://www.freelists.org/webpage/oracle-l


  • Du shenglin at Apr 21, 2012 at 9:16 pm
    I think dba_lobs.in_row can tell you whether ENABLE or DISABLE STORAGE IN
    ROW was in effect by default when the table was created.
    If you use UTF8, a LOB will use double the space of a VARCHAR2 because of
    the multibyte character set (I am not sure this is the best way to put it).
    Tanel Poder has a good document about LOBs, 'LOB Internals and Performance
    Tuning'; you can search for it on Google.
    Thanks
    Shenglin
    On Sat, Apr 21, 2012 at 11:34 PM, Leyi Kamus Zhang wrote:

  • Louis at Apr 22, 2012 at 2:50 am
    Data in CLOB columns is stored in a format that is compatible with UCS-2
    when the database character set is multibyte, such as UTF8 or AL32UTF8.
    This means that the storage space required for an English document doubles
    when the data is converted. Storage for an Asian language document in a
    CLOB column requires less storage space than the same document in a LONG
    column using UTF8, typically around 30% less, depending on the contents of
    the document.
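    [The doubling described above can be approximated in a short sketch. This
    is not Oracle code: it uses Python's UTF-16-BE codec as a stand-in for the
    UCS-2-compatible format Oracle uses internally for CLOBs, where every BMP
    character occupies exactly 2 bytes.]

    ```python
    # Sketch: approximate on-disk CLOB size under a UCS-2-compatible format.
    # UTF-16-BE is a close stand-in: 2 bytes per BMP character.
    def clob_storage_bytes(text: str) -> int:
        """Approximate stored CLOB bytes: 2 bytes per character."""
        return len(text.encode("utf-16-be"))

    english = "A" * 2080            # 2080 single-byte source characters
    print(len(english))             # 2080 characters
    print(clob_storage_bytes(english))  # 4160 bytes -- doubled
    ```

    [So an English document doubles in size, while a Chinese document, which
    would take 3 bytes per character in UTF8, shrinks to 2 bytes per
    character, matching the roughly 30% saving quoted above.]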
    On Apr 21, 2012 at 11:34 PM, Leyi Kamus Zhang <kamusis@gmail.com> wrote:

    --
    Phone: +86 13918046970
    Email & Gtalk: ylouis83@gmail.com
    Personal Blog: http://www.vmcd.org

  • Leyi Kamus Zhang at Apr 22, 2012 at 9:22 am
    Hi Louis
    Yes, you're correct, I used dbms_lob.getlenghth to get the CLOB column
    length, and my database character set is ZHS16GBK, so the length 2080 I
    mentioned in original mail should require 4160 bytes, it will store out of
    line.

    It seemed the 1st paragraph in the doc I mentioned is wrong? By default,
    LOB will store in line, except it's length large than 4000 bytes or
    explicitly using DISABLE STORAGE IN ROW.

    I'll check the Tanel's presentation. Thanks for your info.

    --
    Kamus <kamusis@gmail.com>

    Visit my blog for more : http://www.dbform.com
    Join ACOUG: http://www.acoug.org


  • Tanel Poder at Apr 22, 2012 at 3:09 pm
    If you create the LOB column with DISABLE STORAGE IN ROW, then the LOB data
    is always stored out-of-line. LOB index is always used. Only the LOB ID is
    stored inline, and the ID is looked up in LOB index, where you'll get the
    pointers to actual LOB chunk blocks.
    If you create the LOB column with ENABLE STORAGE IN ROW, then the LOB data
    may be stored in-line or out-of-line.

    If the total LOB data + overhead <= 4000 bytes, then the LOB item will be
    stored in-line. No LOB index is used, even if you modify the LOB later on
    as everything is stored in-line with the row and versioning/rollback is
    achieved with undo data.

    If the total LOB data + overhead > 4000 bytes, then the LOB item will be
    stored out-of-line. If the LOB fits into 12 x LOB_chunk_size, then no LOB
    index entries are created, because the in-line LOB locator can store up to
    12 pointers to the LOB chunk blocks for each lob item. So if your LOB chunk
    size is 8kB, you can store LOB items up to 96kB in size without inserting
    anything to LOB index. However if the LOB item is bigger, then no pointers
    are stored in-row and all pointers will be put to the LOB index.

    Note that once you modify an existing LOB item (which is bigger than 4000
    bytes with its overhead, but smaller than 12 x chunk_size), the LOB index
    will still be used after the first LOB change operation, as pointers to the
    old LOB chunk versions have to be stored in it (LOB segments don't rely on
    undo for rollback & consistency, but just use LOB chunk versioning managed
    by the LOB index).

    The "overhead" of an in-line LOB item is 36 bytes, so the actual LOB data
    must be 4000 - 36 = 3964 bytes or less in order to fully fit in-row.
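    [The rules above can be condensed into a small decision sketch. The
    constants (4000-byte in-row limit, 36-byte locator overhead, 12 in-row
    chunk pointers) are taken from this explanation; treat them as
    approximations rather than authoritative internals.]

    ```python
    # Sketch of the in-line / out-of-line rules for an ENABLE STORAGE IN ROW
    # LOB, as described above. Constants are taken from the discussion.
    INLINE_LIMIT = 4000      # max bytes (data + overhead) stored in-row
    LOCATOR_OVERHEAD = 36    # in-line LOB item overhead
    MAX_INROW_POINTERS = 12  # chunk pointers an in-line locator can hold

    def lob_storage(data_bytes: int, chunk_size: int = 8192):
        """Return (placement, uses_lob_index) for a given LOB data size."""
        if data_bytes + LOCATOR_OVERHEAD <= INLINE_LIMIT:
            return ("in-line", False)       # whole item in the row, no index
        if data_bytes <= MAX_INROW_POINTERS * chunk_size:
            return ("out-of-line", False)   # up to 12 chunk pointers in-row
        return ("out-of-line", True)        # pointers spill to the LOB index

    print(lob_storage(3964))     # ('in-line', False): exactly at the limit
    print(lob_storage(4160))     # ('out-of-line', False): the 2080-char CLOB
    print(lob_storage(100_000))  # ('out-of-line', True): > 12 x 8kB chunks
    ```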

    --
    Tanel Poder
    Enkitec Europe
    http://www.enkitec.com/
    Advanced Oracle Troubleshooting v2.0 Seminars in May/June 2012!
    http://blog.tanelpoder.com/seminar/

    On Sun, Apr 22, 2012 at 12:21 PM, Leyi Kamus Zhang wrote:

  • Tanel Poder at Apr 22, 2012 at 4:18 pm
    (reposting due to overquoting)
    Yeah, LOBs over dblinks are a problem. As every LOB item requires a
    SQL*Net round trip, there's a lot of idle time on both ends of the dblink
    due to network latency. I don't think that even doing Data Pump *over
    dblinks* changes this, as that's how LOB access over OPI works (the data
    has to be retrieved with a separate OPI call; it isn't bundled in the
    regular row stream).

    So unless you want to go with other approaches (like Data Pump-ing to an
    NFS share and mounting it on the target server too), you'll just need to
    start tens or hundreds of connections, each fetching its own subset of
    data.
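    [One way to give each connection its own discrete subset is to carve a
    numeric key range into non-overlapping slices. A hypothetical sketch; the
    column name `id`, table `t`, and dblink `src` are illustrative assumptions,
    not from the thread.]

    ```python
    # Hypothetical sketch: split an ID range into N non-overlapping slices so
    # N concurrent sessions can each copy their own subset over a dblink.
    def split_range(lo: int, hi: int, workers: int):
        """Split [lo, hi] into `workers` near-equal, non-overlapping slices."""
        span = hi - lo + 1
        base, extra = divmod(span, workers)
        slices, start = [], lo
        for i in range(workers):
            end = start + base - 1 + (1 if i < extra else 0)
            slices.append((start, end))
            start = end + 1
        return slices

    # Emit one statement per worker session (names are illustrative).
    for lo_id, hi_id in split_range(1, 1_000_000, 4):
        print(f"INSERT /*+ APPEND */ INTO t SELECT * FROM t@src "
              f"WHERE id BETWEEN {lo_id} AND {hi_id}")
    ```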

    You might end up with various contention issues (on LOB segments, for
    example), especially if you don't have partitioned tables in the target
    system, but it's a matter of testing & benchmarking as usual.

    Tanel.
    On Sun, Apr 22, 2012 at 7:05 PM, Leyi Kamus Zhang wrote:

    Hi Tanel

    Thanks for your reply. I'm reading your ppt on SlideShare, a perfect one.

    One more question on BLOB performance: I know you and Randy wrote Chapter
    13 of "Expert Oracle Exadata", on migration to Exadata. Now I'm doing a
    huge database migration, not to Exadata, just from one Oracle database to
    another. In my test we can only achieve 10MB/s when doing a CTAS of a
    table with a BLOB, and only 25MB/s when doing a CTAS of a normal table
    without LOB columns (database link over a 1000Mbps NIC). I wonder how
    fast you can achieve?

    --
    Kamus <kamusis@gmail.com>

    Visit my blog for more : http://www.dbform.com
    Join ACOUG: http://www.acoug.org


    --
    http://www.freelists.org/webpage/oracle-l
  • Fairlie rego at Apr 23, 2012 at 11:11 am
    I am doing similar migrations to Exadata, where I had to move 13 TB of data from a 10.2.0.4 database to an X2-2 on 11.2.0.2.
    90% of the data was LOBs, but thankfully most of it was read-only and partitioned by month.
    Approach 1
    ----------
    Data Pump export of the LOBs was too slow and took more than 40 hours to export a 500GB partition.

    Approach 2
    ----------
    Insert over a dblink with 12 concurrent scheduler jobs, each working on discrete data (as outlined by Tanel below), took less than 9 hours.

    We used approach 2 and could not use more concurrent sessions because the source is a production system, and hence we couldn't go crazy...

    To reduce downtime for the copy of the active read-write portion we used GoldenGate, and the downtime was less than 1 hour.

    Thanks



    Fairlie Rego
    Senior Oracle Consultant
    http://el-caro.blogspot.com/
    M: +61 402 792 405



    ________________________________
    From: Tanel Poder <tanel@tanelpoder.com>
    To: Leyi Kamus Zhang <kamusis@gmail.com>
    Cc: louis <ylouis83@gmail.com>; oracle-l@freelists.org
    Sent: Monday, 23 April 2012 2:17 AM
    Subject: Re: By default, LOB store in line or out of line?



  • Leyi Kamus Zhang at Apr 23, 2012 at 2:22 pm
    Hi, Fairlie
    Our test used 8 concurrent CTAS sessions over a dblink; we transferred 7TB
    of LOBs in 64 hours, using an "as of scn" flashback query against the
    source database.

    Did your 12 concurrent jobs transfer 500GB or 13TB in the 9 hours? I also
    tested insert /*+ append parallel */, but the result was far worse than
    CTAS. Can you share your test results? Did you test CTAS?

    Our migration is painful... the source database has a single table over
    7TB, almost all LOB, no partitioning, actively inserted and updated. We
    have to use WHERE conditions to split the source table into little pieces
    and have to use flashback queries to get consistency. The worst experience
    was getting an ORA-01555 error after 20 hours of CTAS, even though we had
    set the undo retention for the LOB to 48 hours.

    --
    Kamus <kamusis@gmail.com>

    Visit my blog for more : http://www.dbform.com
    Join ACOUG: http://www.acoug.org

    On Mon, Apr 23, 2012 at 7:10 PM, fairlie rego wrote:

    Approach 2
    ---------
    Insert over a dblink by having 12 concurrent scheduler jobs each working
    on discrete data (as outlined by Tanel below) took less than 9 hours.

  • Du shenglin at Apr 24, 2012 at 12:58 am
    I had a similar painful case migrating from LONG (mostly chained rows) to
    LOB. The bottleneck for us was 'db file sequential read' against undo:
    once the CTAS started to read UNDO, it became worse and worse. Even to
    read one data block, it needed many more reads of undo data. Finally, it
    failed with ORA-01555.
    Now we have decided to export from the physical standby database (opened
    read-only); that way we don't need the undo reads.
    On the primary, we created a trigger to record the changed PK values. Once
    the export/import finishes, we will use a script to sync up the data based
    on these PKs.
    Thanks
    Shenglin
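    [The catch-up step described above can be sketched abstractly: a trigger
    on the primary logs each changed primary key during the export, and
    afterwards only those rows are re-fetched and applied to the target. The
    dict shapes here are illustrative stand-ins for the real tables, not the
    actual schema or script.]

    ```python
    # Hypothetical sketch of syncing rows whose PKs changed during an export.
    def sync_changed_rows(source: dict, target: dict, changed_pks: set) -> dict:
        """Upsert (or delete) target rows for each PK the trigger recorded."""
        for pk in changed_pks:
            if pk in source:
                target[pk] = source[pk]   # row inserted/updated after export
            else:
                target.pop(pk, None)      # row deleted after export
        return target

    source = {1: "v2", 3: "new"}   # current state on the primary
    target = {1: "v1", 2: "stale"} # state captured by the export
    print(sync_changed_rows(source, target, {1, 2, 3}))
    # {1: 'v2', 3: 'new'}
    ```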


    On Mon, Apr 23, 2012 at 10:17 PM, Leyi Kamus Zhang wrote:

  • Yong Huang at Apr 24, 2012 at 6:25 pm
    (Hijacking this thread!)

    I always tell developers not to use certain data types unless they're absolutely needed. They like to use CLOB even when the likelihood that the text will exceed 4000 characters is almost zero. Some like to use CHAR, thinking it's faster and more space-efficient than VARCHAR2. Wrong data types add an unnecessary burden on DBAs.

    Yong Huang

Discussion Overview
group: oracle-l
posted: Apr 21, '12 at 3:35p
active: Apr 24, '12 at 6:25p
posts: 10
users: 6