Quick way to drop all objects in a schema
Is there a quick way to drop all objects in a schema without dropping
the user itself?


This is with Oracle 10.


  • Dunbar, Norman at Nov 19, 2009 at 11:03 am
    Morning John,

    I tend to do something like the following (typed off the top of my head
    - there may be typos!)

    begin
        for x in (select table_name from user_tables) loop
            execute immediate 'drop table ' || x.table_name ||
                              ' cascade constraints purge';
        end loop;
    end;
    /

    begin
        for x in (select object_type, object_name
                    from user_objects
                   where object_type not in
                         ('PACKAGE BODY', 'UNKNOWN', 'DATABASE LINK')) loop
            execute immediate 'drop ' || x.object_type || ' ' || x.object_name;
        end loop;
    end;
    /

    The reason I don't drop database links is purely because we use the
    above code to drop a test schema before refreshing it from live. We do
    not want the database links to be recreated pointing at whatever other
    production databases we use! We want to keep them pointing at test
    databases.

    We don't drop the schema because that messes up privs granted TO the
    schema from other schemas that we are not refreshing. Grants made FROM
    the test schema are refreshed when we import, so that's not a worry.
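
    After both blocks have run, a quick sanity check that the schema really
    is empty is to count what's left in user_objects (a sketch; BIN$ rows,
    if any, are recycle-bin remnants and can be cleared with PURGE
    RECYCLEBIN):

    ```sql
    -- what, if anything, survived the two drop loops?
    select object_type, count(*)
      from user_objects
     group by object_type
     order by object_type;
    ```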

    HTH

    Cheers,
    Norm.

    Norman Dunbar
    Contract Oracle DBA
    CIS Engineering Services
    Internal : 7 28 2051
    External : 0113 231 2051

    Information in this message may be confidential and may be legally privileged. If you have received this message by mistake, please notify the sender immediately, delete it and do not copy it to anyone else. We have checked this email and its attachments for viruses. But you should still check any attachment before opening it. We may have to make this message and any reply to it public if asked to under the Freedom of Information Act, Data Protection Act or for litigation. Email messages and attachments sent to or from any Environment Agency address may also be accessed by someone other than the sender or recipient, for business purposes. If we have sent you information and you wish to use it please read our terms and conditions which you can get by calling us on 08708 506 506. Find out more about the Environment Agency at www.environment-agency.gov.uk

  • Anonymous at Nov 19, 2009 at 4:48 pm
    Hi guys,

    I have a question from a friend: he was told by his manager to give
    privileges to common users to make exports and imports in production (in
    all databases I have under control we don't allow users to do this, even
    in development).

    We are thinking about the advantages and disadvantages of this approach;
    right now we are concerned with the disadvantages:

    EXP:

    Users will be able to take an export of anything they want, including
    huge tables (50M rows or more).
    If they do this locally (not sure if they have access) they may use up
    all the disk space.
    If they do this remotely, a huge export may take hours to complete.
    Might performance be affected by a huge export?

    IMP:

    Data may easily get duplicated on tables without referential integrity,
    given the lack of control over imp operations.
    Users granted exp_full_database and imp_full_database will be able to
    exp and imp tables they were never granted access to, including sys and
    system objects - note this dangerous behaviour:

    $ sqlplus "myuser"/mypass

    SQL> insert into system.x values (3);
    insert into system.x values (3)

    *

    ERROR at line 1:
    ORA-01031: insufficient privileges

    SQL> insert into other_user.x values (3);
    insert into other_user.x values (3)

    *

    ERROR at line 1:
    ORA-01031: insufficient privileges

    SQL> exit

    $ sqlplus / as sysdba

    SQL> grant imp_full_database to "myuser";

    Grant succeeded.

    SQL> exit

    $ sqlplus "myuser"/mypass

    SQL> insert into other_user.x values (3);

    1 row created.

    SQL> insert into system.x values (3);

    1 row created.
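
    Before handing out these roles, it is worth knowing who already holds
    them. A quick audit query (a sketch; run as a DBA):

    ```sql
    -- which accounts already hold the full export/import roles?
    select grantee, granted_role, admin_option
      from dba_role_privs
     where granted_role in ('EXP_FULL_DATABASE', 'IMP_FULL_DATABASE')
     order by grantee;
    ```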



    Is there any other disadvantage you can think of?

    TIA

    Regards,

    Gabriel
  • Steve Baldwin at Nov 19, 2009 at 7:18 pm
    Hi John,

    If you are using Oracle TYPEs, you need a few extra bits. Here's the
    script I use (we don't use DB links) ...

    declare
        l_n_del    pls_integer;
        l_n_failed pls_integer;
        l_sql      varchar2 (2000);
    begin
        for i in 1 .. 2 loop
            l_n_del := 0;
            l_n_failed := 0;
            for rec in (select * from dba_objects
                         where owner = upper(:schema)
                           and object_type in ('PACKAGE', 'SEQUENCE',
                                               'TABLE', 'TYPE', 'VIEW',
                                               'SYNONYM', 'FUNCTION',
                                               'PROCEDURE')) loop
                l_sql := 'drop ' || rec.object_type || ' ' || :schema ||
                         '.' || rec.object_name ||
                         (case rec.object_type
                              when 'TABLE' then ' CASCADE CONSTRAINTS'
                              else ''
                          end);
                begin
                    execute immediate l_sql;
                    l_n_del := l_n_del + 1;
                exception
                    when others then
                        dbms_output.put_line ('Failed : ' || l_sql);
                        dbms_output.put_line ('         ' || sqlerrm);
                        l_n_failed := l_n_failed + 1;
                end;
            end loop;
            dbms_output.put_line ('On pass ' || i || ' I dropped ' ||
                                  l_n_del || ' objects. Failed to drop ' ||
                                  l_n_failed || ' objects');
            exit when l_n_failed = 0;
        end loop;
    end;
    /

  • Michael Fontana at Nov 19, 2009 at 8:49 pm
    It's not often a problem, but dropping objects in this manner can be inefficient (I can recall waiting on numerous fet$ deletes) and could be difficult to recover from should it fail.

    Dropping a user "cascade" utilizes Oracle code and will be consistent AND complete from release to release. Writing a script such as this may not work when new object types or behaviours come into existence in Oracle 12, 13, etc.

    I also don't like the idea of passively leaving links or other objects assumed to be valid in a schema; why not just recreate the needed objects properly and exclude the rest? This avoids the chance that a new one was created in the source but unwanted in the target. It's also more secure.

    I would certainly be in favor of first cloning an existing user as a new one and then dropping the old, unless resources are unavailable to take such action.
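
    For completeness, the drop-and-recreate approach looks roughly like
    this (the schema name, password, tablespaces and grants below are
    invented for illustration; the real ones would come from your own
    provisioning script):

    ```sql
    -- remove the user and every object it owns
    drop user testuser cascade;

    -- recreate the empty account
    create user testuser identified by change_me
      default tablespace users
      temporary tablespace temp
      quota unlimited on users;

    -- reapply whatever privileges the application needs
    grant create session, create table, create view,
          create procedure, create sequence to testuser;
    ```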
  • Chet justice at Nov 19, 2009 at 9:53 pm
    I too prefer a DROP USER CASCADE, but for sandbox purposes...I typically use
    this:

    http://www.oraclenerd.com/2009/09/how-to-clean-your-schema.html

    The only object types I haven't taken into account (yet) are queues.

    My script will remove scheduled jobs (as well as programs and chains) and
    regular old DBMS_JOBs.

  • Jared Still at Nov 19, 2009 at 8:31 pm
    I tend to look at this much the same as deleting a large number
    of rows from a table - it's much more efficient to create a table
    with the wanted rows, drop the old table and rename the new one.
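
    The table-level analogy is the familiar CTAS-and-rename pattern (table
    name and predicate are invented here; note that indexes, constraints,
    grants and triggers do not follow the rename and must be recreated):

    ```sql
    -- keep only the wanted rows in a new table
    create table emp_keep as
      select * from emp where status = 'ACTIVE';

    -- drop the original and slide the new table into its place
    drop table emp purge;
    rename emp_keep to emp;
    ```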

    By the same token, it is much easier to just drop the account
    and recreate it.

    The attached script dup_user.sql will generate a script that is used
    to recreate an account.

    clears.sql and clear_for_spool.sql are just supporting scripts.

    One thing that dup_user.sql does not do is deal with Java privs.

    The attached script java_privs.sql will display the Java privs.

    Or should you feel like updating dup_user.sql ...

    Jared Still
    Certifiable Oracle DBA and Part Time Perl Evangelist
    Oracle Blog: http://jkstill.blogspot.com
    Home Page: http://jaredstill.com

    application/octet-stream attachment: clear_for_spool.sql

    application/octet-stream attachment: clears.sql

    application/octet-stream attachment: dup_user.sql

    application/octet-stream attachment: java_privs.sql
  • Steve montgomerie at Nov 19, 2009 at 10:24 pm
    We do a little of both: we drop tables and views through a cursor and
    then drop user cascade. I've found this to be about twice as fast as a
    simple drop user.

    It's PeopleSoft, so we have about 23,000 tables and 25,000 indexes.
    Takes about 45 mins for us.

    Steve
  • Mark W. Farnham at Nov 19, 2009 at 10:41 pm
    I'd wager somewhere between a nickel and a donut that you can additionally
    speed it up if you drop the constraints first, then the indexes, then the
    tables, and then, as you say, drop user cascade for completeness. Now while
    I have not run a speed test since before we had local extent management and
    the I doubt the difference is still as much it was with dictionary
    management. It probably only matters with local extent management when you
    have huge numbers of tables and indexes. A big part of the old advantage
    (meaning dictionary managed) was if you could take the tablespaces off line
    for the drop, avoiding the "pecimal" (my made up antonym for optimal)
    algorithm to juggle putting bits back on fet$ when others could be grabbing
    bits from fet$ for the same tablespace. But it also relieves Oracle of a lot
    of recursive sql elbow grease to figure out what order to drop things in the
    cascade. If you whack the constraints, then the indexes, and then the
    tables, that leaves a lot less for Oracle to figure out. I wouldn't be
    surprised if it about cut that in half again. (but that wasn't a guess Alex
    - I just said I wouldn't be surprised.) If you try it, please let us know
    whether I'm right.
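
    A rough sketch of that ordering, run as the schema owner, might look
    like the following - untested, and some drops may still error (for
    example indexes that back enabled constraints), so treat it as an
    outline rather than a finished script:

    ```sql
    begin
        -- foreign keys first, so parent keys can be dropped cleanly
        for c in (select table_name, constraint_name
                    from user_constraints
                   where constraint_type = 'R') loop
            execute immediate 'alter table "' || c.table_name ||
                '" drop constraint "' || c.constraint_name || '"';
        end loop;

        -- then primary and unique keys
        for c in (select table_name, constraint_name
                    from user_constraints
                   where constraint_type in ('P', 'U')) loop
            execute immediate 'alter table "' || c.table_name ||
                '" drop constraint "' || c.constraint_name || '" cascade';
        end loop;

        -- then the ordinary indexes
        for i in (select index_name
                    from user_indexes
                   where index_type = 'NORMAL') loop
            execute immediate 'drop index "' || i.index_name || '"';
        end loop;

        -- finally the tables; drop user cascade mops up the rest
        for t in (select table_name from user_tables) loop
            execute immediate 'drop table "' || t.table_name || '" purge';
        end loop;
    end;
    /
    ```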

    mwf

  • Go-Faster Consultancy Ltd. at Nov 19, 2009 at 11:19 pm
    There are no referential integrity constraints in PeopleSoft.
    There are no primary key constraints in PeopleSoft - they create unique
    indexes instead.
    Every character and numeric column and most date columns are not
    nullable - so lots of NOT NULL constraints.

    If you use Unicode on application v8, character columns are still
    defined with byte semantics, so PeopleSoft puts a length-checking
    constraint on each and every character column - as well as causing a
    huge parse overhead, this will make drop table commands very slow. Only
    from Application v9 do they use character semantics.

    Most of the tens of thousands of tables and indexes come from the multiple
    'non-shared' tables created for each PeopleSoft temporary record (these are
    permanent tables used for temporary working storage during batch programs -
    different concurrent instances of the same program use different tables).
    If you take the time and trouble to set the number of temporary table
    instances on an Application Engine to the actual number of concurrent
    instances of the program that you will run, you can significantly reduce the
    number of tables that you have to deal with. However, almost nobody does
    this, because it is a lot of work.

    regards

    David Kurtz
    Go-Faster Consultancy Ltd.
    tel: +44 (0)7771 760660
    fax: +44 (0)7092 348865
    mailto:david.kurtz_at_go-faster.co.uk
    web: www.go-faster.co.uk
    Book: PeopleSoft for the Oracle DBA: http://www.psftdba.com
    DBA Blogs: PeopleSoft: http://blog.psftdba.com, Oracle:
    http://blog.go-faster.co.uk
    PeopleSoft DBA Forum: http://groups.yahoo.com/group/psftdba
  • Japplewhite_at_austinisd.org at Nov 20, 2009 at 12:47 am
    When folks mention how many tables and indexes their apps have, I get a
    kind of perverse pleasure in topping them with what we deal with. Our
    Student Information System produces a complete set of the tables it uses
    for each School AND for each SchoolYear! With about 186 tables in each
    base set, about 189 Schools, and 9 SchoolYears of data, our Prod database
    has 187,800 tables and 295,713 indexes.

    That App also uses no RI constraints, just unique indexes. We generate
    export, drop, etc. parameter files and lists with SQL scripts that write
    SQL scripts. So, to drop and replace a SchoolYear set of tables in our
    Dev database with the set from Prod, we generate a script with about
    25,000 - 30,000 Drop Table statements for Dev and an export *.par file
    for Prod with about the same number of tables in the Tables= list. It's
    all scripted and cron'd, so it's painless for us, though the DBs work
    very hard for a long time.
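
    The SQL-that-writes-SQL technique is straightforward in SQL*Plus; the
    schema name and year-suffix naming convention below are invented for
    illustration:

    ```sql
    set pagesize 0 linesize 200 feedback off heading off
    spool drop_year_tables.sql

    -- emit one DROP TABLE statement per table in the chosen year set
    select 'drop table ' || owner || '.' || table_name || ' purge;'
      from dba_tables
     where owner = 'SIS'
       and table_name like '%2009';

    spool off
    -- review the generated script before running it:
    -- @drop_year_tables.sql
    ```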

    Fortunately, we're getting a new SIS next year - with only one set of
    tables - so we probably won't break the 200,000 table count.

    Jack C. Applewhite - Database Administrator
    Austin I.S.D. - MIS Department
    512.414.9715 (wk) / 512.935.5929 (pager)


Discussion Overview
group: oracle-l
categories: oracle
posted: Nov 19, '09 at 9:25a
active: Nov 20, '09 at 12:47a
posts: 11
users: 11
website: oracle.com
