[Crash-utility] handling missing kdump pages in diskdump format

Bob Montgomery bob.montgomery at hp.com
Wed Mar 14 18:11:28 UTC 2007


On Wed, 2007-03-14 at 08:34 -0500, Dave Anderson wrote:
> Vivek Goyal wrote:
> 
> >
> > > =====================================
> > > Can ELF Dumpfiles Solve This Problem?
> > > =====================================
> > >
> > > To achieve correctness with ELF dumpfiles, one could perhaps remap the
> > > four types of pages to the three types of ELF representations so that
> > > "A) Not In The Address Space" and "B) Excluded Type" were both mapped
> > > to "1) Not In The Address Space".  Then "C) Zero Content" would map
> > > to "2) Not In The File, Zero Fill".  You would lose the ability to
> > > know if a page were missing because it was never in the address space
> > > in the first place, or because it was excluded because of its type.
> > > But if you read a zero, you'd know it really was a zero.
> > >
> >
> > I think this is the way to go. Why would I want to know whether a page
> > was never present or makedumpfile filtered it out? I think we can live
> > with that, and let's just not create any mapping for excluded pages in
> > the finally generated ELF headers.
> >
> 
> If "Excluded Type" pages were mapped as "Not in The Address Space",
> aren't you going to end up with an absurd number of small PT_LOAD
> segments?  I believe the original intent was to avoid that.
> 
> Dave
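
For reference, the three ELF representations under discussion map onto
the program header fields roughly like this (a minimal sketch with
invented segment values, using the standard Elf64_Phdr layout):

#include <elf.h>

/* 1) In the address space, data present in the file:
 *    p_filesz == p_memsz, and p_offset points at the page data.   */
Elf64_Phdr present = {
	.p_type   = PT_LOAD,
	.p_offset = 0x10000,      /* data lives in the dumpfile      */
	.p_paddr  = 0x100000,
	.p_filesz = 0x200000,
	.p_memsz  = 0x200000,
};

/* 2) In the address space but not in the file (zero fill):
 *    p_memsz > p_filesz; the reader supplies zeroes for the tail. */
Elf64_Phdr zero_fill = {
	.p_type   = PT_LOAD,
	.p_offset = 0x210000,
	.p_paddr  = 0x300000,
	.p_filesz = 0,            /* nothing stored on disk...       */
	.p_memsz  = 0x100000,     /* ...but the range still exists   */
};

/* 3) Not in the address space: simply no PT_LOAD covers the range. */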

This is already covered by the algorithm in makedumpfile's ELF
generator, except that the current code lumps the excluded pages in
with the zero pages as it scans the PT_LOAD segments of the original
vmcore.  Large contiguous groups of excluded pages are identified and
removed from the map, probably by turning one PT_LOAD into two: one on
either side of the excluded zone (see the sketch below).  Smaller
isolated groups of excluded pages that don't meet the group-size
threshold (the worst case being my earlier example of excluding every
odd page in a large zone of memory) are ignored by the removal code
and simply left in their existing segments.  The threshold value
(currently 256 pages) can be adjusted to balance the reduction in file
size against the growth in the number of PT_LOAD segments.  Leaving
the isolated pages in place costs nothing but dump size, although, as
in the example I showed, you can sometimes read a page from an ELF
dumpfile that should no longer have been there.
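
A sketch of that splitting step (the struct and its fields are
simplified stand-ins for the real PT_LOAD bookkeeping; the 256-page
threshold matches the value mentioned above):

#define PAGE_SIZE  4096ULL
#define THRESHOLD  256            /* minimum excluded run worth a split */

struct load {                     /* simplified stand-in for a PT_LOAD  */
	unsigned long long start; /* physical start address             */
	unsigned long long end;   /* physical end address (exclusive)   */
};

/*
 * Split 'seg' around an excluded run [run_start, run_end) if the run
 * meets the threshold.  Returns the number of resulting segments
 * written into out[]: 1 if the run was too small to bother with,
 * otherwise up to 2 pieces, one on either side of the excluded zone.
 */
static int maybe_split(struct load seg,
		       unsigned long long run_start,
		       unsigned long long run_end,
		       struct load out[2])
{
	int n = 0;

	if ((run_end - run_start) / PAGE_SIZE < THRESHOLD) {
		out[n++] = seg;   /* small isolated group: leave it alone */
		return n;
	}
	if (run_start > seg.start) {     /* piece before the excluded zone */
		out[n].start = seg.start;
		out[n++].end = run_start;
	}
	if (run_end < seg.end) {         /* piece after the excluded zone  */
		out[n].start = run_end;
		out[n++].end = seg.end;
	}
	return n;
}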

That's why I suggested that the existing makedumpfile code might have
to make two passes to produce an ELF dumpfile: one to generate the
new, longer list of PT_LOAD segments with the big excluded zones
removed, and a second to see whether any large zero areas allow
further changes to the PT_LOAD segments, moving the zero-fill regions
to the ends of segments and dropping the zero-page data images from
the dumpfile.
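
The second pass might look something like this (is_zero_page() is a
hypothetical helper standing in for whatever content check the filter
already performs; trailing runs of zero pages shrink p_filesz so the
reader zero-fills them, while p_memsz keeps the segment's full memory
extent):

#include <elf.h>

#define PAGE_SIZE 4096ULL

/* Hypothetical helper: true if the page at physical address 'paddr'
 * in the original vmcore contains only zero bytes. */
extern int is_zero_page(unsigned long long paddr);

/*
 * Pass two: walk backward from the end of a PT_LOAD's file image and
 * stop at the first non-zero page.  Everything after that point is
 * dropped from the file by shrinking p_filesz; readers reconstruct
 * it as zero fill because p_memsz is left untouched.
 */
static void trim_trailing_zeros(Elf64_Phdr *phdr)
{
	unsigned long long kept = phdr->p_filesz;

	while (kept >= PAGE_SIZE &&
	       is_zero_page(phdr->p_paddr + kept - PAGE_SIZE))
		kept -= PAGE_SIZE;

	phdr->p_filesz = kept;    /* p_memsz unchanged: zero-fill tail */
}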


Bob Montgomery