[PostgreSQL-Hackers] Vacuum and Transactions
Oct 5, 2005 at 12:19 pm
Rod Taylor wrote:
: I have maintenance_work_mem set to about 1GB in size.
Isn't that a bit too much?
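For context, maintenance_work_mem caps the memory that VACUUM (and index builds) may use; it is an upper bound, not a preallocation. A sketch of checking and overriding it per session (the table name is hypothetical; unit suffixes such as '1GB' postdate the 8.1-era servers of this thread, where the value is an integer number of kilobytes, e.g. 1048576):

```sql
-- Session-level override; postgresql.conf works too.
SET maintenance_work_mem = '1GB';
SHOW maintenance_work_mem;
VACUUM VERBOSE my_table;  -- hypothetical table name
```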
: If we started the vacuum with the indexes and remembered the lowest xid per index, we could then vacuum the heap up to the lowest of those xids, no? We could then also vacuum each index separately.
Andreas
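Andreas's scheme might be sketched as a toy model, assuming a drastically simplified data model where each index records the xids it still references and heap tuples are tagged by deleting xid (purely illustrative Python; the names and structure are invented, not PostgreSQL internals):

```python
def vacuum_indexes_then_heap(index_entries, heap_tuples):
    """Toy sketch of the proposal: scan each index first and note the
    lowest xid it still references, then reclaim heap tuples whose
    deleting xid falls below the minimum of those per-index xids.
    Illustrative only -- not how PostgreSQL's vacuum is implemented."""
    # Pass 1: "vacuum" each index, remembering its lowest remaining xid.
    lowest_xid_per_index = [min(xids) for xids in index_entries]
    # The heap can only be cleaned up to the lowest xid across all indexes.
    safe_xid = min(lowest_xid_per_index)
    # Pass 2: reclaim heap tuples deleted before that safe horizon.
    kept = [t for t in heap_tuples if t >= safe_xid]
    return safe_xid, kept
```

The point of the ordering is that each index can then be processed independently, and the heap pass never removes a tuple some index still points at.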
: Understood. I've seen them, but until they're well tested in the newest version I won't be using them in a production environment. I do appreciate the goal, and I look forward to this concept being applied, or to some other method of splitting up the work vacuum needs to do, in the future.
--
: Accomplishing the pg_listener cleanup often enough can be difficult in some circumstances. I have other items in this database with high churn as well; Slony was just an example.
--
: Yeah, I want to do some more testing of that; it should be easy to improve the "abuse" of pg_listener a whole lot. That eliminates the ability to use transactions on things that ought to be updated in a single transaction...
--
output = ("cbbrowne" "@" "ntlug.org")
http://cbbrowne.com/info/lsf.html
MS-Windows: Proof that P.T. Barnum was correct.
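A common stopgap for the pg_listener churn discussed above is simply to vacuum that catalog on a tight schedule, independent of whole-database vacuums. A hypothetical cron entry (database name invented; the interval would be tuned to the LISTEN/NOTIFY churn rate):

```
# Reclaim dead pg_listener rows every five minutes.
*/5 * * * *  psql -d mydb -c 'VACUUM pg_listener'
```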
Thread active Oct 4, 2005 – Oct 6, 2005
7 users in discussion: Rod Taylor (4), Hannu Krosing (2), Tom Lane (2), Gaetano Mendola (1), Andreas Zeugswetter (1), Simon Riggs (1), Chris Browne (1)