Message-ID: <20120125151940.GC3901@csn.ul.ie>
Date: Wed, 25 Jan 2012 15:19:40 +0000
From: Mel Gorman <mel@....ul.ie>
To: Rik van Riel <riel@...hat.com>
Cc: linux-mm@...ck.org, lkml <linux-kernel@...r.kernel.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Minchan Kim <minchan.kim@...il.com>,
KOSAKI Motohiro <kosaki.motohiro@...il.com>
Subject: Re: [PATCH v2 -mm 2/3] mm: kswapd carefully call compaction
On Tue, Jan 24, 2012 at 01:22:43PM -0500, Rik van Riel wrote:
> With CONFIG_COMPACTION enabled, kswapd does not try to free
> contiguous free pages, even when it is woken for a higher order
> request.
>
> This could be bad for eg. jumbo frame network allocations, which
> are done from interrupt context and cannot compact memory themselves.
> Higher than before allocation failure rates in the network receive
> path have been observed in kernels with compaction enabled.
>
> Teach kswapd to defragment the memory zones in a node, but only
> if required and compaction is not deferred in a zone.
>
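The gate the changelog describes ("only if required and compaction is not deferred in a zone") can be sketched as a minimal userspace model of the per-zone deferral backoff: after a failed compaction, skip the next window of attempts and double the window on each further failure. The struct fields, constants, and function names below are simplified stand-ins for illustration, not the kernel's actual code:

```c
#include <stdbool.h>

/*
 * Simplified model of per-zone compaction deferral. Names here
 * (zone_model, *_model helpers) are illustrative, not the kernel's.
 */
struct zone_model {
    unsigned long compact_considered; /* attempts skipped since last failure */
    unsigned int compact_defer_shift; /* backoff exponent */
};

#define COMPACT_MAX_DEFER_SHIFT 6

/* Returns true while still inside the backoff window. */
static bool compaction_deferred_model(struct zone_model *zone)
{
    unsigned long defer_limit = 1UL << zone->compact_defer_shift;

    if (++zone->compact_considered > defer_limit)
        zone->compact_considered = defer_limit; /* avoid overflow */

    return zone->compact_considered < defer_limit;
}

/* Called when compaction fails: reset the counter, widen the window. */
static void defer_compaction_model(struct zone_model *zone)
{
    zone->compact_considered = 0;
    if (zone->compact_defer_shift < COMPACT_MAX_DEFER_SHIFT)
        zone->compact_defer_shift++;
}

/* kswapd-style gate: compact only for order > 0 and when not deferred. */
static bool kswapd_should_compact_model(struct zone_model *zone, int order)
{
    return order > 0 && !compaction_deferred_model(zone);
}
```

With this shape, a zone that keeps failing compaction is tried exponentially less often, which is why kswapd can call the gate on every wakeup without burning CPU on hopeless zones.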
We used to do something vaguely like this in the past and it was
reverted because compaction was stalling for too long. With the
merging of sync-light, this should be less of an issue, but we should
be watchful of high CPU usage from kswapd with too much time spent
in memory compaction, even though I recognise that compaction takes
place in kswapd's exit path. In 3.3-rc1, there is a risk of high
CPU usage anyway because kswapd may be scanning over large numbers
of dirty pages it is no longer writing, so care will be needed to
distinguish between the different high CPU usage problems.
That said, I didn't spot any obvious problems, so:
Acked-by: Mel Gorman <mel@....ul.ie>
Thanks.
--
Mel Gorman
SUSE Labs