On 8/13/13 8:09 PM, Robert Haas wrote:
is removed, the segment automatically goes away (we could allow for
server-lifespan segments as well with only trivial changes, but I'm
not sure whether there are compelling use cases for that).
To clarify... you're talking about something that would intentionally survive postmaster restart? I don't see a use for that either...
postmaster startup. The other problem, of making sure that segments
get unmapped at the proper time, is solved using the resource owner
mechanism. There is an API to create a mapping which is
session-lifespan rather than resource-owner lifespan, but the default
is resource-owner lifespan, which I suspect will be right for common
uses. Thus, there are four separate occasions on which we remove
shared memory segments: (1) resource owner cleanup, (2) backend exit
(for any session-lifespan mappings and anything else that slips
through the cracks), (3) postmaster exit (in case a child dies without
cleaning itself up), and (4) postmaster startup (in case the
postmaster dies without cleaning up).
Ignorant question... is ResourceOwner related to memory contexts? If not, would memory contexts be a better way to handle shared memory segment cleanup?
There are quite a few problems that this patch does not solve. First,
It also doesn't provide any mechanism for notifying backends of a new segment. Arguably that's beyond the scope of dsm.c, but ISTM that it'd be useful to have a standard method or three of doing that; perhaps just some convenience functions wrapping the methods mentioned in comments.
Finally, I'd like to thank Noah Misch for a lot of discussion and
thought that enabled me to make this patch much better than it
otherwise would have been. Although I didn't adopt Noah's preferred
solutions to all of the problems, and although there are probably
still some problems buried here, there would have been more if not for
his advice. I'd also like to thank the entire database server team at
EnterpriseDB for allowing me to dump large piles of work on them so
that I could work on this, and my boss, Tom Kincaid, for not allowing
other people to dump large piles of work on me.
Thanks to you and the rest of the folks at EnterpriseDB... dynamic shared memory is something we've needed forever! :)

Other comments...

+ * If the state file is empty or the contents are garbled, it probably means
+ * that the operating system rebooted before the data written by the previous
+ * postmaster made it to disk. In that case, we can just ignore it; any shared
+ * memory from before the reboot should be gone anyway.

I'm a bit concerned about this; I know that in older versions it was possible for the main shared memory segment to be left behind after a crash and need cleaning up by hand. Dynamic shared memory potentially multiplies that by 100 or more. I think it'd be worth changing dsm_write_state_file so it always writes a new file and then does an atomic mv (or something similar).

+ * If some other backend exited uncleanly, it might have corrupted the
+ * control segment while it was dying. In that case, we warn and ignore
+ * the contents of the control segment. This may end up leaving behind
+ * stray shared memory segments, but there's not much we can do about
+ * that if the metadata is gone.

Similar concern... in this case, would it be possible to always write updates to an unused slot and then atomically update a pointer? That would be more work than what I suggested above, so maybe just a TODO for now...

Though... is there anything a dying backend could do that would corrupt the control segment to the point that it would screw up segments allocated by other backends and not related to the dead backend? Like marking a slot as not used when it is still in use and isn't associated with the dead backend? (I'm assuming that if a backend dies unexpectedly then all other backends using memory shared with that backend will need to handle themselves accordingly so that we don't need to worry about that in dsm.c.)


I was able to simplify dsm_create a bit (depending on your definition of simplify...). I'm not sure whether the community is OK with using an ereport to exit a loop (that could safely move outside the loop, though). In any case, I traded five lines of (mostly) duplicate code for an if block and a break:

+	nitems = dsm_control->nitems;
+	/* Intentionally go one slot past what's currently been allocated. */
+	for (i = 0; i <= nitems; ++i)
+	{
+		if (dsm_control->item[i].refcnt == 0)
+		{
+			dsm_control->item[i].handle = seg->handle;
+			/* refcnt of 1 triggers destruction, so start at 2 */
+			dsm_control->item[i].refcnt = 2;
+			seg->control_slot = i;
+			if (i == nitems)	/* We hit the end of the list */
+			{
+				/* Verify that we can support an additional mapping. */
+				if (nitems >= dsm_control->maxitems)
+					ereport(ERROR,
+							(errcode(ERRCODE_INSUFFICIENT_RESOURCES),
+							 errmsg("too many dynamic shared memory segments")));
+
+				dsm_control->nitems++;
+			}
+			break;
+		}
+	}
+
+	LWLockRelease(DynamicSharedMemoryControlLock);
+	return seg;



Should this (in dsm_attach)

+ * If you're hitting this error, you probably want to use attempt to

be

+ * If you're hitting this error, you probably want to attempt to

?


Should dsm_impl_op sanity-check the arguments after op? I didn't notice any checks in the type-specific code, but I also didn't read all of it... are we just depending on the OS to sanity-check?

Also, does the GUC code enforce that the GUC is always set to a supported implementation? If not, the error in dsm_impl_op should be more user-friendly.

I basically stopped reading after dsm_impl_op... the rest of the stuff was rather over my head.
--
Jim C. Nasby, Data Architect jim@nasby.net
512.569.9461 (cell) http://jim.nasby.net

(pgsql-hackers @ postgresql.org, posted Aug 14, 2013)