Message-Id: <20100610231706.1d7528f2.akpm@linux-foundation.org>
Date: Thu, 10 Jun 2010 23:17:06 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Mel Gorman <mel@....ul.ie>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, Dave Chinner <david@...morbit.com>,
Chris Mason <chris.mason@...cle.com>,
Nick Piggin <npiggin@...e.de>, Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 6/6] vmscan: Do not writeback pages in direct reclaim
On Tue, 8 Jun 2010 10:02:25 +0100 Mel Gorman <mel@....ul.ie> wrote:
> When memory is under enough pressure, a process may enter direct
> reclaim to free pages in the same manner kswapd does. If a dirty page is
> encountered during the scan, this page is written to backing storage using
> mapping->writepage. This can result in very deep call stacks, particularly
> if the target storage or filesystem is complex. Stack overflows have already
> been observed on XFS, but the problem is not XFS-specific.
>
> This patch prevents direct reclaim from writing back pages by not setting
> may_writepage in scan_control. Instead, dirty pages are placed back on the
> LRU lists for either background writing by the BDI threads or kswapd. If
> dirty pages are encountered during direct lumpy reclaim, the process will
> kick the background flusher threads before trying again.
>
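For anyone reading along without the patch in front of them: the logic
being changed lives in shrink_page_list(). The patch works by leaving
may_writepage clear in direct reclaim's scan_control; the net effect is
roughly the following (a paraphrased sketch, not the actual diff; the
lumpy_reclaim_mode flag name and the wakeup_flusher_threads() call are
my reading of the series):

	/*
	 * Sketch only: whether reclaim is allowed to write this page
	 * back.  Assumes the usual mm/vmscan.c context (struct page,
	 * struct scan_control).
	 */
	static bool reclaim_may_write_page(struct page *page,
					   struct scan_control *sc)
	{
		if (!PageDirty(page) || !sc->may_writepage)
			return false;

		/*
		 * kswapd keeps its current behaviour: it may still call
		 * pageout() -> mapping->a_ops->writepage(), the call
		 * chain that produces the deep stacks when it happens
		 * from direct reclaim.
		 */
		if (current_is_kswapd())
			return true;

		/*
		 * Direct reclaim: never write the page.  It goes back
		 * on the LRU for the BDI flusher threads or kswapd to
		 * clean; lumpy reclaim kicks the flushers so that
		 * happens promptly before the scan is retried.
		 */
		if (sc->lumpy_reclaim_mode)
			wakeup_flusher_threads(0); /* 0 == write everything */

		return false;
	}
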
This wouldn't have worked at all well back in the days when you could
dirty all memory with MAP_SHARED. The balance_dirty_pages() calls on
the fault path will now save us, but if for some reason we were ever to
revert those, we'd need to revert this change too, I suspect.
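To be concrete about the calls I mean: when a MAP_SHARED page is dirtied
through a write fault, mm/memory.c ends up doing, in essence (a
simplified sketch; the helper name here is made up, and the real code is
spread across do_wp_page() and friends):

	/* Sketch of the fault-path throttling referred to above. */
	static void fault_dirty_shared_page_sketch(struct page *page,
						   struct address_space *mapping)
	{
		/* Mark the page dirty so the flusher threads will see it... */
		set_page_dirty(page);

		/*
		 * ...and throttle this task if it has dirtied too many
		 * pages.  This is what stops MAP_SHARED from dirtying
		 * all of memory these days.
		 */
		balance_dirty_pages_ratelimited(mapping);
	}
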
As it stands, it would be wildly incautious to make a change like
this without first working out why we're pulling so many dirty pages
off the LRU tail, and fixing that.