Message-Id: <1280932711-23696-1-git-send-email-mel@csn.ul.ie>
Date: Wed, 4 Aug 2010 15:38:29 +0100
From: Mel Gorman <mel@csn.ul.ie>
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: Wu Fengguang <fengguang.wu@intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
Dave Chinner <david@fromorbit.com>,
Chris Mason <chris.mason@oracle.com>,
Nick Piggin <npiggin@suse.de>, Rik van Riel <riel@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Christoph Hellwig <hch@infradead.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Mel Gorman <mel@csn.ul.ie>
Subject: [RFC PATCH 0/2] Prioritise inodes and zones for writeback required by page reclaim
Commenting on the series "Reduce writeback from page reclaim context V6",
Andrew Morton noted:

  direct-reclaim wants to write a dirty page because that page is in the
  zone which the caller wants to allocate from! Telling the flusher threads
  to perform generic writeback will sometimes cause them to just gum the
  disk up with pages from different zones, making it even harder/slower to
  allocate a page from the zones we're interested in, no?
On the machines used to test the series, there were relatively few zones
and only one BDI, so the scenario described is a possibility. This is a
very early prototype series aimed at mitigating the problem.
Patch 1 adds wakeup_flusher_threads_pages() which takes a list of pages
from page reclaim. Each inode belonging to a page on the list is marked
I_DIRTY_RECLAIM. When the flusher thread wakes, inodes with this tag are
unconditionally moved to the wb->b_io list for writing.
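
For illustration, a rough sketch of what the patch 1 mechanism might look
like follows. Only wakeup_flusher_threads_pages(), I_DIRTY_RECLAIM and the
wb->b_io list are from the series; the flag value, the locking and the
queue_reclaim_inodes() helper are guesses against 2.6.35-era writeback
code, not the actual implementation.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/writeback.h>
#include <linux/backing-dev.h>

#define I_DIRTY_RECLAIM		(1 << 8)	/* illustrative bit value */

/* Tag the inode backing each page so the flusher prioritises it */
void wakeup_flusher_threads_pages(long nr_pages, struct list_head *pages)
{
	struct page *page;

	list_for_each_entry(page, pages, lru) {
		struct address_space *mapping = page_mapping(page);

		if (mapping && mapping->host) {
			spin_lock(&inode_lock);
			mapping->host->i_state |= I_DIRTY_RECLAIM;
			spin_unlock(&inode_lock);
		}
	}

	wakeup_flusher_threads(nr_pages);
}

/*
 * When the flusher wakes, tagged inodes skip the usual expiry checks
 * and move straight to wb->b_io (simplified; queue_io() would be the
 * natural hook point).
 */
static void queue_reclaim_inodes(struct bdi_writeback *wb)
{
	struct inode *inode, *tmp;

	list_for_each_entry_safe(inode, tmp, &wb->b_dirty, i_list) {
		if (inode->i_state & I_DIRTY_RECLAIM) {
			inode->i_state &= ~I_DIRTY_RECLAIM;
			list_move_tail(&inode->i_list, &wb->b_io);
		}
	}
}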
Patch 2 notes that writing back inodes does not necessarily write back
pages belonging to the zone that page reclaim is concerned with. In
response, it adds a zone and a counter to wb_writeback_work. As pages from
the target zone are written, the zone-specific counter is updated, and the
flusher thread checks it when a specific zone is being targeted. While
more pages may be written than necessary, the assumptions are that the
pages need cleaning eventually, that the inode must be relatively old to
have pages at the end of the LRU, that the IO will be relatively efficient
due to fewer random seeks, and that pages from the target zone will still
be cleaned.
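
Similarly, a sketch of the patch 2 accounting. The zone and counter added
to wb_writeback_work are from the series; the field and helper names here
are invented for illustration.

#include <linux/types.h>
#include <linux/mm.h>
#include <linux/writeback.h>

struct wb_writeback_work {
	long nr_pages;
	struct super_block *sb;
	/* ... existing fields ... */
	struct zone *zone;	/* zone page reclaim is targeting, or NULL */
	long nr_zone_pages;	/* pages still to clean from that zone */
};

/* Credit the work item as each page from the target zone is written */
static void account_zone_written(struct wb_writeback_work *work,
				 struct page *page)
{
	if (work->zone && page_zone(page) == work->zone)
		work->nr_zone_pages--;
}

/*
 * The writeback loop can then stop once enough target-zone pages have
 * been cleaned, even though pages from other zones may have been
 * written along the way.
 */
static bool zone_target_met(struct wb_writeback_work *work)
{
	return work->zone && work->nr_zone_pages <= 0;
}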
Testing did not show any significant difference in the number of dirty
file pages written back from page reclaim, but the lack of multiple BDIs
and NUMA nodes in the test rig is a problem. Perhaps someone else has
access to a more suitable test rig.
Any comments on the suitability of such a direction?
fs/fs-writeback.c | 83 +++++++++++++++++++++++++++++++++++++++++---
include/linux/fs.h | 5 ++-
include/linux/writeback.h | 5 +++
mm/page-writeback.c | 12 ++++++-
mm/vmscan.c | 11 ++++--
5 files changed, 103 insertions(+), 13 deletions(-)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/