Hi Mark.

Thanks for all the info and questions.

Our thinking behind parallel 4 for the indexes was primarily that, during
index creation, some of the very large tables (several over 1 GB and
several million rows) might build quicker with slight parallelism. We've
broken this one import out into, at this point, about 15 separate
index creates. Breaking it up further will be harder and most of them
finish around the same time. We've already broken up the sections that took
significantly longer. If any table has more than one index, all of those
indexes are created in the same thread.
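
The split described above can be sketched as running each index-create script in its own worker. This is a hypothetical illustration, not the poster's actual setup: the script names, the count of 15, and the `run_script` stand-in (which echoes instead of invoking `sqlplus @script.sql`) are all assumptions.

```python
# Toy sketch of "about 15 separate index creates" run concurrently.
# In practice run_script would shell out to sqlplus; here it just
# echoes so the sketch is self-contained.
from concurrent.futures import ThreadPoolExecutor
import subprocess

def run_script(script):
    # Real version (assumption): subprocess.run(["sqlplus", "-s",
    # "user/pw", f"@{script}"]) with output captured to a log file.
    return subprocess.run(["echo", f"ran {script}"],
                          capture_output=True, text=True).stdout.strip()

scripts = [f"idx_set_{n:02d}.sql" for n in range(1, 16)]  # hypothetical names
with ThreadPoolExecutor(max_workers=15) as pool:
    results = list(pool.map(run_script, scripts))
print(results[0])  # ran idx_set_01.sql
```

Because all the workers are launched up front and `pool.map` preserves order, the scripts run concurrently but the results come back in a predictable order for logging.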

When we saw the "name-service call wait" showing up we found the 2004 email
about it, but it didn't seem to help us much. We don't have an oracm.log
file on the system; it must have gone away with the Clusterware changes in
11.1 and especially 11.2.

I did look at all the .log files under the Grid Infrastructure installation
(/grid/log) with -mtime -2 and didn't see anything in particular going on,
though I may simply not have known what to look for.

The Disk Group for this database is not shared with any other databases.

We decided to try this on an 11.2 database on the same hardware (and same
diskgroup) and during the index creation we saw the same "name-service call
wait" showing up just like it did with 11.1 database.

We're considering asking Oracle what this means since they don't seem to
have any useful information on MOS. (Speaking of MOS, they've swapped
things around from right to left when you're searching for stuff. I think
they've fixed ALL problems with MOS with this page format change. It is now

We don't ever see this wait except during this time. If we ask Oracle and
get an answer I'll let you know what they say.

On Thu, Apr 1, 2010 at 12:44 AM, Mark W. Farnham wrote:

Since you have plenty of parallel job threads (13), why are you also running
parallel 4 on the individual actions? In general, parallel options on
individual commands incur some overhead in order to put more of the machine
to work on a single command. That is a good trade-off when the priority is
getting one individual task to complete in as little elapsed time as
possible. If you cut back to parallel 1 and you have headroom available in
i/o capacity and/or cpu cycles, run more job threads.

Setting up your threads so that all the indexes on a given table or
partition are created at the same time will take advantage of any possible
caching at any layer of the read complex. For index creation table blocks
may not go into the buffer cache, but getting the underlying blocks into the
cache of your storage array from one index creation **may** help the
subsequent index creations on the same table.

Setting up your threads so that some threads are all the small stuff will
minimize the chances of more threads having the sort phase of index
construction spill to disk.
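
The two grouping rules above (co-locate a table's indexes in one thread, balance threads by size) can be sketched as a greedy assignment. This is a made-up illustration: the table names, sizes, and the two-thread count are invented for the demo, and real planning would pull estimated sizes from the data dictionary.

```python
# Greedy sketch: keep all of a table's indexes in one thread, and
# place each table on the currently lightest-loaded thread, largest
# tables first, to roughly balance total work per thread.
from collections import defaultdict

def assign_threads(indexes, n_threads):
    """indexes: list of (table, index_name, est_mb).
    Returns one list of index names per thread."""
    per_table = defaultdict(list)
    for table, name, mb in indexes:
        per_table[table].append((name, mb))
    loads = [0] * n_threads
    plan = [[] for _ in range(n_threads)]
    # Biggest tables first, then greedily pick the lightest thread.
    for table, idxs in sorted(per_table.items(),
                              key=lambda kv: -sum(mb for _, mb in kv[1])):
        t = loads.index(min(loads))
        plan[t].extend(name for name, _ in idxs)
        loads[t] += sum(mb for _, mb in idxs)
    return plan

demo = [("ORDERS", "ORD_PK", 1200), ("ORDERS", "ORD_CUST_IX", 900),
        ("ITEMS", "ITM_PK", 800), ("CUST", "CUST_PK", 150)]
print(assign_threads(demo, 2))
# [['ORD_PK', 'ORD_CUST_IX'], ['ITM_PK', 'CUST_PK']]
```

Both ORDERS indexes land in the same thread, so the second create can benefit from any array-cache warming done by the first, which is the effect the paragraph above describes.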

As for name-service call wait, there is a thread in the archives about that
from K Gopalakrishnan. Dec. 7, 2004 if I recall correctly.

I thought that was a RAC instance startup problem, but you state you're
non-RAC. And that was 10g, not 11g.

Ah. You're using clusterware and the ASM is RAC, right?

Someone may be able to point you to the exact reference on this wait
(KGogal being a likely suspect), but I would look in your clusterware logs
on the theory that the wait is coming from clustered ASM access. Are the
other nodes also pounding on ASM? The same diskgroups?

I'm speculating a bit here. Quite possibly you're just beating on ASM too
hard. Up to the diminishing-returns limit on keeping additional index sorts
from spilling to disk, you might look into whether an increase to the pga
aggregate size would help reduce total i/o.

Good luck,


*From:* oracle-l-bounce@freelists.org *On Behalf Of *Scott Sibert
*Sent:* Wednesday, March 31, 2010 4:58 PM
*To:* Oracle-L Freelists
*Subject:* name-service call wait

We've tried looking around but we've had trouble finding any information on
this. It doesn't show up in the Reference guide either.

We're doing lots of index creations in non-RAC 64-bit Oracle on RHEL 5.4.
By lots I mean we have 478,587 indexes. We're trying to export from one
database and import into another, doing lots of creative testing to
manually parallelize table creation, data importing, and index creation.

Today when we looked in Grid Control for the database we saw lots of
"other" waits during the index creation. Drilling into that, we see that
"name-service call wait" makes up most of "other." ADDM complains about a
large number of unusual "other" wait events, and ASH shows "name-service
call wait" as 25% of all waits during the three-hour period.

The box is a Dell PE 2900 III with dual quad-core X5470s at 3.33 GHz and
48 GB of memory, plugged into an SMC DMX array. (Meaning it should be fast
enough.) This host is a member of a 4-node 11.2 clusterware cluster, and
this 11.1 database does use ASM.

We're not sure what this "name-service call wait" really means.

A little about the index creation: we've broken up the DDL (extracted from
the export) into 13 commands to create indexes that match a certain wildcard
criteria. We cover all the indexes this way and have them fairly evenly
distributed according to size of indexes and how long each set will run.
Each index create statement has PARALLEL 4 in it to help speed index
creation. I know we could have up to 52 threads creating indexes on an
8-core box, but it does pretty well. (It is likely that when we go live it
will be on a couple of newer--not-yet-ordered--R900s with dual quad-core
E7440s at 2.4 GHz, 128 GB of memory, and RAC.) And even when most of the
creates have finished and only one or two are still running, we still see
this "name-service call wait" showing up.
