Re: Disk Forwarding Hairball

> Problem: We discovered and solved a nasty hairball that comes from
> the combination of: 1) allowing flocks to migrate and leave forwarders
> behind, and 2) insisting that stubs be canonical.  A stub for the
> original location and a stub for the new location of a migrated object
> can simultaneously exist in memory, and when they get faulted in, the
> C++ become operation can't connect all the pointers.  

You weren't aware of this already?  I thought we'd been over this when
we were talking about canonical stubs.  (Hmmm...  Maybe I assumed it
was obvious.)

> Solution: When creating a ShepherdStub (at the edge of a flock), first
> check whether any already-existing shepherds/stubs have the same hash
> value.  If so, fault them in.  If the resulting object is actually
> the object that the stub would point to, then just return a pointer
> to it.  Note that this algorithm actually needs to be more
> symmetrical, because both stubs might point at forwarders.

Actually, you don't want to fault them in.  That would bring in their
flocks, which (in addition to the recursion problem) may unfold to
something large, or require extensive computation.  You only need
to chase down their true locations to see if they are the same object.
(If shepherds are stored redundantly for efficiency, you may need to
perform additional tests on apparent misses.  That is not currently a
problem, but we must remember it when designing future upgrades.)

> Since all shepherds use sequence numbers for their hashes, the actual
> incidence of true hash collisions between different objects will be
> very low.  We typically only drag things in from disk when they are
> actually pointers to the same object.  Even then, we can stop dragging
> in flocks as soon as we know that the two stubs would point to the
> same object.

> This can cause an arbitrary number of reads (bounded by the length of
> forwarder chains), but URDI doesn't get overwhelmed by too many reads.

Tracing a forwarder chain should be a rare event, and forwarder chains
should be short, because of an additional property of forwarders:  Every
time you:

 - Hold the write view (and in the single-thread implementation you
   always do) on a backend structure mounted read/write, and

 - Follow a forwarder.

you have enough information to change the off-snarf pointer to point
at the actual object (or as far along the forwarder chain as you
traversed).  Therefore you make that change, and (at least one link
of) that forwarder chain is never followed again.  Also, though
intermediate links may still be required (because they have additional
clients on unknown snarfs), they too can be shorted to point (directly
at or closer to) their ultimate target.

Shorting a forwarder never >requires< changes to more than two snarfs
to move between consistent states (although changing three is the
efficient form of the common case, and changing all the snarfs on
a chain may occasionally be desirable, to free the forwarders when
their last client is informed of the new location).

Backend structures on read-only media (such as optical disks) will
typically have had the forwarders optimized away (though the code
should handle them, so read/write media can be mounted read-only).