Message-Id: <200704102300.29411.rjw@sisk.pl>
Date: Tue, 10 Apr 2007 23:00:28 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Pavel Machek <pavel@....cz>
Cc: LKML <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Nigel Cunningham <ncunningham@...uxmail.org>
Subject: Re: [RFC][PATCH -mm] swsusp: Use rbtree for tracking allocated swap
On Monday, 9 April 2007 14:39, Pavel Machek wrote:
> Hi!
>
> > > > Some time ago we discussed the possibility of simplifying swsusp's approach
> > > > to tracking the swap pages it allocates for saving the image (so that
> > > > they can be freed if there's an error).
> > > >
> > > > I think we can get back to it now, as it is a nice optimization that should
> > > > allow us to use less memory (almost always) and improve performance a bit.
> > > >
> > >
> > > Well, I do not think you can measure the difference, but...
> >
> > As far as the memory usage is concerned, I can. :-) Usually, it takes 1 extent
> > (40 B on x86_64) to register all of the allocated swap pages. If bitmaps are
> > used, we need as many bits as there are swap pages available (for 1 GB swap
> > and 4 KB pages that would be ~250000 bits, which gives ~8 pages, and we can
> > save more than 800 extents using that much memory).
>
> Well... obviously it works for the best case. OTOH, for the worst case, it
> needs 40 bytes for every 2 bits. That's 16000% worse. And for that
> nightmare-fragmented 1 GB swap, you'll need 5,000,000 bytes... which is
> pretty bad.
>
> OTOH 5 MB RAM per 1 GB swap is not _too_ bad... so we can do it...
In real-life scenarios you always need to keep enough free swap space for
suspending, which IMO effectively prevents the "totally fragmented 1 GB swap"
situation from happening.
Greetings,
Rafael