--I think you've answered at least 1/2 of my question,
Andrew.

--I'd like to figure out whether Postgres reaches a point where
it will no longer index or vacuum a table based on its size (your answer
tells me 'No' - it will keep going until it is done, splitting each
table into 1 GB segments).
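
If it helps to see that segmentation directly, here's a rough Python sketch
(the paths and OIDs in it are only examples, not from this thread) that lists
the ~1 GB segment files a single table is stored in. You'd look up the
database OID and the table's relfilenode first, e.g. in pg_database and
pg_class.

#!/usr/bin/env python
# Rough sketch: list the segment files that make up one table. Postgres
# stores a table as $PGDATA/base/<db_oid>/<relfilenode> plus <relfilenode>.1,
# .2, ... for each additional ~1 GB. Look the numbers up first with e.g.:
#   SELECT oid FROM pg_database WHERE datname = 'mydb';
#   SELECT relfilenode FROM pg_class WHERE relname = 'mytable';
import os
import sys

def list_segments(pgdata, db_oid, relfilenode):
    """Print each segment file belonging to the table and its size."""
    tabledir = os.path.join(pgdata, "base", str(db_oid))
    base = str(relfilenode)
    for name in sorted(os.listdir(tabledir)):
        # Segments are named <relfilenode>, <relfilenode>.1, <relfilenode>.2, ...
        if name == base or name.startswith(base + "."):
            path = os.path.join(tabledir, name)
            print("%-20s %12d bytes" % (name, os.path.getsize(path)))

if __name__ == "__main__":
    # e.g.: python list_segments.py /var/lib/pgsql/data 16384 16385
    list_segments(sys.argv[1], sys.argv[2], sys.argv[3])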

--And if THAT is true, then why am I getting failures when
I vacuum or index a table just after it reaches 2 GB?

--And if it's an OS (or any other) problem, how can I factor
out Postgres?
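
One way to factor Postgres out, purely as a sanity check: try to write a
file past 2 GB on the same filesystem with a small script and see whether
the OS complains. A rough sketch (the path is only an example):

#!/usr/bin/env python
# Rough sketch: write a file past 2 GB outside Postgres, on the same
# filesystem as the data directory. If this fails near the 2^31-byte mark,
# the limit is in the OS, filesystem, or ulimit, not in Postgres.
import os

PATH = "/var/lib/pgsql/bigfile.test"   # example path; use your $PGDATA filesystem
TARGET = 3 * 1024 * 1024 * 1024        # aim a bit past 2 GB
CHUNK = b"x" * (1024 * 1024)           # write 1 MB at a time

written = 0
try:
    with open(PATH, "wb") as f:
        while written < TARGET:
            f.write(CHUNK)
            written += len(CHUNK)
    print("OK: wrote %d bytes without error" % written)
except (IOError, OSError) as exc:
    print("failed after %d bytes: %s" % (written, exc))
finally:
    if os.path.exists(PATH):
        os.remove(PATH)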

--Thanks!

-X


[snip]

Has anyone seen whether this is a problem with the OS or with the way
Postgres handles large files (or whether I should recompile it
with different options)?

What do you mean by "Postgres handles large files"? A file-size
problem isn't related to the size of your table, because Postgres
splits a table's files at 1 GB.
If it were an output problem (a dump written to a single file, for
example), you could run into a limit, but you said you were vacuuming.
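
If it is an output-style limit, one quick thing to check is the file-size
ulimit of the account involved; a rough Python sketch:

#!/usr/bin/env python
# Rough sketch: report the file-size resource limit (ulimit -f) for the
# current account. A low RLIMIT_FSIZE would bite a single big output file
# (e.g. a dump written to one file) long before the ~1 GB table segments would.
import resource

def fmt(limit):
    if limit == resource.RLIM_INFINITY:
        return "unlimited"
    return "%d bytes" % limit

soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
print("file size limit: soft=%s, hard=%s" % (fmt(soft), fmt(hard)))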


A

[snip]
