When folks mention how many tables and indexes their apps have, I get a
kind of perverse pleasure in topping them with what we deal with. Our
Student Information System produces a complete set of the tables it uses
for each School AND for each SchoolYear! With about 186 tables in each
base set, about 189 Schools, and 9 SchoolYears of data, our Prod database
has 187,800 tables and 295,713 indexes.
That App also uses no RI constraints, just unique indexes. We generate
export, drop, etc. parameter files and lists with SQL scripts that write
SQL scripts. So, to drop and replace a SchoolYear set of tables in our
Dev database with the set from Prod, we generate a script with about
25,000 - 30,000 Drop Table statements for Dev and an export *.par file for
Prod with about the same number of tables in the Tables= list. It's all
scripted and cron'd, so it's painless for us, though the DBs work very hard
for a long time.
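For readers who haven't seen the "SQL that writes SQL" technique, here is a
minimal sketch of a SQL*Plus script that spools a file of DROP TABLE
statements for one SchoolYear's set of tables. The schema name, table-name
pattern, and spool file name are all hypothetical, not AISD's actual naming
convention:

```sql
-- Hypothetical sketch: generate DROP TABLE statements for one
-- SchoolYear's tables, selected by a made-up name pattern.
-- Spool the output to a file, review it, then run it.
SET PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMSPOOL ON
SPOOL drop_schoolyear.sql

SELECT 'DROP TABLE ' || owner || '.' || table_name || ' PURGE;'
FROM   all_tables
WHERE  owner = 'SIS'                              -- assumed schema name
AND    table_name LIKE 'SY2009\_%' ESCAPE '\';    -- assumed pattern

SPOOL OFF
-- Then: @drop_schoolyear.sql
```

The same pattern (a SELECT that emits command text) generates the Tables=
list for an export parameter file.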
Fortunately, we're getting a new SIS next year - with only one set of
tables - so we probably won't break the 200,000 table count.
Jack C. Applewhite - Database Administrator
Austin I.S.D. - MIS Department
512.414.9715 (wk) / 512.935.5929 (pager)
From: "Go-Faster Consultancy Ltd."
Date: 11/19/2009 05:20 PM
Subject: RE: Quick way to drop all objects in a schema
Sent by: oracle-l-bounce_at_freelists.org
There are no referential integrity constraints in PeopleSoft.
There are no primary key constraints in PeopleSoft - they create unique
indexes instead.
Every character and numeric column, and most date columns, are not
nullable - so there are lots of NOT NULL constraints.
If you use Unicode on application v8, character columns are still defined
with byte semantics, so PeopleSoft puts a length-checking constraint on
each and every character column - as well as causing a huge parse overhead,
this will make drop table commands very slow. Only from Application v9 do
they use character semantics.
Most of the tens of thousands of tables and indexes come from the multiple
'non-shared' tables created for each PeopleSoft temporary record (these are
permanent tables used for temporary working storage during batch programs -
different concurrent instances of the same program use different tables).
If you take the time and trouble to set the number of temporary table
instances on an Application Engine to the actual number of concurrent
instances of the program that you will run, you can significantly reduce
the number of tables that you have to deal with. However, almost nobody
does this, because it is a lot of work.
regards
David Kurtz
Go-Faster Consultancy Ltd.
tel: +44 (0)7771 760660
fax: +44 (0)7092 348865
mailto:david.kurtz_at_go-faster.co.uk
web: www.go-faster.co.uk
Book: PeopleSoft for the Oracle DBA: http://www.psftdba.com
DBA Blogs: PeopleSoft: http://blog.psftdba.com, Oracle:
http://blog.go-faster.co.uk
PeopleSoft DBA Forum: http://groups.yahoo.com/group/psftdba
-----Original Message-----
From: oracle-l-bounce_at_freelists.org
On Behalf Of Mark W. Farnham
Sent: Thursday, November 19, 2009 10:42 PM
To: stmontgo_at_gmail.com; JDunn_at_sefas.com
Cc: oracle-l@freelists.org
Subject: RE: Quick way to drop all objects in a schema
Importance: High
I'd wager somewhere between a nickel and a donut that you can
additionally speed it up if you drop the constraints first,
then the indexes, then the tables, and then, as you say, drop
user cascade for completeness. Now, I have not run a speed test
since before we had local extent management, and I doubt the
difference is still as much as it was with dictionary
management. It probably only matters with local extent
management when you have huge numbers of tables and indexes. A
big part of the old advantage (meaning dictionary managed) was
that you could take the tablespaces offline for the drop,
avoiding the "pecimal" (my made-up antonym for optimal)
algorithm that juggles putting bits back on fet$ while others
could be grabbing bits from fet$ for the same tablespace. But
it also relieves Oracle of a lot of recursive SQL elbow grease
to figure out what order to drop things in the cascade. If you
whack the constraints, then the indexes, and then the tables,
that leaves a lot less for Oracle to figure out. I wouldn't be
surprised if it about cut that in half again. (But that wasn't
a guess, Alex - I just said I wouldn't be surprised.) If you
try it, please let us know whether I'm right.
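[Ed.: the ordering Mark suggests - constraints, then indexes, then tables,
then DROP USER ... CASCADE - might be sketched as below. The schema name is
a placeholder, and this is a rough sketch, not a production script: IOT
primary keys, LOB segments, and similar special cases will need handling,
so test on a throwaway schema first.]

```sql
-- Hedged sketch of the ordered drop for a schema named TESTUSER.
BEGIN
  -- 1. Drop constraints; foreign keys ('R') first so that the
  --    primary/unique keys they reference can be dropped after.
  FOR c IN (SELECT table_name, constraint_name
            FROM   dba_constraints
            WHERE  owner = 'TESTUSER'
            AND    constraint_type IN ('R', 'P', 'U', 'C')
            ORDER BY DECODE(constraint_type, 'R', 1, 2)) LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE TESTUSER.' || c.table_name ||
                      ' DROP CONSTRAINT ' || c.constraint_name;
  END LOOP;
  -- 2. Drop indexes (skip LOB indexes, which Oracle manages itself)
  FOR i IN (SELECT index_name
            FROM   dba_indexes
            WHERE  owner = 'TESTUSER'
            AND    index_type NOT LIKE '%LOB%') LOOP
    EXECUTE IMMEDIATE 'DROP INDEX TESTUSER.' || i.index_name;
  END LOOP;
  -- 3. Drop tables
  FOR t IN (SELECT table_name FROM dba_tables
            WHERE  owner = 'TESTUSER') LOOP
    EXECUTE IMMEDIATE 'DROP TABLE TESTUSER.' || t.table_name || ' PURGE';
  END LOOP;
END;
/
-- 4. Mop up everything else (views, sequences, packages, ...)
DROP USER TESTUSER CASCADE;
```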
mwf
-----Original Message-----
From: oracle-l-bounce_at_freelists.org
On Behalf Of steve montgomerie
Sent: Thursday, November 19, 2009 5:25 PM
To: JDunn_at_sefas.com
Cc: oracle-l@freelists.org
Subject: Re: Quick way to drop all objects in a schema
We do a little of both: we drop tables and views through a
cursor and then drop user cascade. I've found this to be
about twice as fast as a simple drop user.
It's PeopleSoft, so we have about 23,000 tables and 25,000
indexes. Takes about 45 mins for us.
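[Ed.: a hedged sketch of the two-phase approach Steve describes - tables
and views via a cursor, then DROP USER CASCADE to sweep up the rest. The
schema name PSOWNER is a placeholder:]

```sql
-- Sketch only: drop tables and views explicitly, then let
-- DROP USER CASCADE remove the remaining objects.
BEGIN
  FOR o IN (SELECT object_name, object_type
            FROM   dba_objects
            WHERE  owner = 'PSOWNER'              -- assumed schema
            AND    object_type IN ('TABLE', 'VIEW')) LOOP
    IF o.object_type = 'TABLE' THEN
      EXECUTE IMMEDIATE 'DROP TABLE PSOWNER.' || o.object_name ||
                        ' CASCADE CONSTRAINTS PURGE';
    ELSE
      EXECUTE IMMEDIATE 'DROP VIEW PSOWNER.' || o.object_name;
    END IF;
  END LOOP;
END;
/
DROP USER PSOWNER CASCADE;
```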
Steve
--
http://www.freelists.org/webpage/oracle-l