Message-ID: <20090427080213.GB2244@cmpxchg.org>
Date: Mon, 27 Apr 2009 10:02:13 +0200
From: Johannes Weiner <hannes@...xchg.org>
To: Hugh Dickins <hugh@...itas.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Rik van Riel <riel@...hat.com>
Subject: Re: [patch 2/3][rfc] swap: try to reuse freed slots in the allocation area
On Wed, Apr 22, 2009 at 08:59:06PM +0100, Hugh Dickins wrote:
> On Mon, 20 Apr 2009, Johannes Weiner wrote:
>
> > A swap slot for an anonymous memory page might get freed again just
> > after allocating it when further steps in the eviction process fail.
> >
> > But the clustered slot allocation will go ahead allocating after this
> > now unused slot, leaving a hole at this position. Holes waste space
> > and act as a boundary for optimistic swap-in.
> >
> > To avoid this, check if the next page to be swapped out can sensibly
> > be placed at this just freed position. And if so, point the next
> > cluster offset to it.
> >
> > The acceptable 'look-back' distance is the number of slots swap-in
> > clustering uses as well so that the latter continues to get related
> > context when reading surrounding swap slots optimistically.
> >
> > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
> > Cc: Hugh Dickins <hugh@...itas.com>
> > Cc: Rik van Riel <riel@...hat.com>
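(For readers without the patch in front of them: the core is a small
look-back check in the free path. The sketch below paraphrases it;
the helper name is made up, and this is not the literal diff.)

	static void reuse_freed_slot(struct swap_info_struct *si,
				     unsigned long offset)
	{
		unsigned long lookback = 1UL << page_cluster;

		/*
		 * Pull the allocation cursor back to a slot freed
		 * just behind it, so the next allocation plugs the
		 * hole instead of leaving it behind.
		 */
		if (offset < si->cluster_next &&
		    si->cluster_next - offset <= lookback)
			si->cluster_next = offset;
	}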
>
> I'm glad you're looking into this area, thank you.
> I've a feeling that you're going to come up with something good
> here, but that neither of these patches (2/3 and 3/3) is yet it.
>
> This patch looks plausible, but I'm not persuaded by it.
>
> I wonder what contribution it made to the impressive figures in
> your testing - I suspect none, that it barely exercised this path.
>
> I worry that by jumping back to use the slot in this way, you're
> actually propagating the glitch: by which I mean, if the pages are
> all as nicely linear as you're supposing, then now one of them
> will get placed out of sequence, unlike with the existing code.
>
> And note that swapin's page_cluster is used in a strictly aligned
> way (unlike swap allocation's SWAPFILE_CLUSTER): if you're going
> to use page_cluster to bound this, then perhaps you should be
> aligning too. Perhaps, perhaps not.
Thank you, will think about that.
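(To spell the alignment out for myself: swapin readahead covers the
naturally aligned group of 1 << page_cluster slots around the target,
something like the following; the variable names are mine, not from
the source.)

	unsigned long nr = 1UL << page_cluster;
	unsigned long start = offset & ~(nr - 1);	/* aligned down */
	unsigned long end = start + nr;			/* exclusive */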
> If this patch is worthwhile, then don't you want also to be
> removing the " && vm_swap_full()" test from vmscan.c, where
> shrink_page_list() activate_locked does try_to_free_swap(page)?
I fear that freeing the swap slot there can fail quite easily anyway.
At least that is what my testing patches suggest - we hit quite a lot
of already swap-cached pages in shrink_page_list(), and I think that
is where they come from. It's a different issue, though.
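(For reference, the test in question in shrink_page_list(), quoted
from memory, so take the exact shape with a grain of salt:)

	activate_locked:
		/* Not a candidate for swapping, so reclaim swap space. */
		if (PageSwapCache(page) && vm_swap_full())
			try_to_free_swap(page);
		SetPageActive(page);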
> But bigger And/Or: you remark that "holes act as a boundary for
> optimistic swap-in". Maybe that's more worth attacking? I think
> that behaviour is dictated purely by the convenience of a simple
> offset:length interface between swapfile.c's valid_swaphandles()
> and swap_state.c's swapin_readahead().
>
> If swapin readahead is a good thing (I tend to be pessimistic about
> it: think it's worth reading several pages while the disk head is
> there, but hold no great hopes that the other pages will be useful -
> though when I've experimented with removing it, it's certainly proved
> to be of some value), then I think you'd do better to restructure
> that interface, so as not to stop at the holes.
Hm, let's try that. I am thinking of extending valid_swaphandles()
to return exactly the valid slots through a bitmap that can represent
holes; rough sketch below.
I think the read-in makes sense, but not when the system is already
thrashing - then it just uses memory for data we are not sure will be
needed at all. Perhaps it should be throttled or disabled at some
point.
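Something along these lines, perhaps - sketch only, the signature and
the fixed-size bitmap are made up and not from a real patch:

	static int swaphandle_bitmap(swp_entry_t entry,
				     unsigned long *offset,
				     unsigned long *map)
	{
		struct swap_info_struct *si = &swap_info[swp_type(entry)];
		unsigned long base, toff;
		int nr = 1 << page_cluster;

		/* the naturally aligned cluster around the target */
		base = (swp_offset(entry) >> page_cluster) << page_cluster;
		bitmap_zero(map, nr);

		spin_lock(&swap_lock);
		for (toff = base; toff < base + nr && toff < si->max; toff++) {
			/* zero means free, SWAP_MAP_BAD means unusable */
			if (si->swap_map[toff] &&
			    si->swap_map[toff] != SWAP_MAP_BAD)
				set_bit(toff - base, map);
		}
		spin_unlock(&swap_lock);

		*offset = base;
		return nr;
	}

swapin_readahead() would then walk the set bits instead of a plain
start+length loop, so holes no longer cut the window short.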
Hugh, thanks a lot for your great feedback.
Hannes