I'm just wondering if anyone has any insights on how to speed this up.
Our fact tables consist of the primary keys of the involved dimension tables
plus numeric count columns.
To build our fact table, call it Z, we (as I understand the process)
grab the primary key of the row in dimension A, plus information we
can use to find the associated row in dimension B. We go to dimension B,
grab its primary key plus information that lets us find the associated
row in dimension C, and so on through 8 dimensions.
This gives us one row in the fact table.
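To make the process concrete, here's a minimal sketch of that row-at-a-time chain, with each dimension modeled as a dict keyed on its lookup attribute. All names (dim_a, b_key, etc.) are illustrative, not from our actual schema:

```python
# Hypothetical sketch of the row-at-a-time fact build described above.
# Each dict stands in for a dimension table indexed on its lookup column.
dim_a = {"a1": {"pk": 1, "b_key": "b1"}}
dim_b = {"b1": {"pk": 10, "c_key": "c1"}}
dim_c = {"c1": {"pk": 100}}

def build_fact_row(a_key, count):
    row_a = dim_a[a_key]            # look up the row in dimension A
    row_b = dim_b[row_a["b_key"]]   # follow its pointer into dimension B
    row_c = dim_c[row_b["c_key"]]   # ...and so on through the chain
    # One fact row: the dimension primary keys plus the numeric count
    return (row_a["pk"], row_b["pk"], row_c["pk"], count)

print(build_fact_row("a1", 5))
```

In the real database each of those dict lookups is an index probe plus a rowid fetch, repeated per source row.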
This worked OK for our ETL developers when we were dealing with 10,000 rows
in the development database.
Now we're working with a much larger source set, on the order of
100,000,000 rows, and it doesn't work well at all. Basically, each row in the
fact table requires a full index scan and a rowid fetch from each of the
dimension tables.
Does anybody have experience or even a theoretical insight into a better way
to do this?