Message-ID: <20110928180907.GD1696@barrios-desktop>
Date: Thu, 29 Sep 2011 03:09:07 +0900
From: Minchan Kim <minchan.kim@...il.com>
To: Johannes Weiner <jweiner@...hat.com>
Cc: Andrew Morton <akpm@...gle.com>, Mel Gorman <mgorman@...e.de>,
Christoph Hellwig <hch@...radead.org>,
Dave Chinner <david@...morbit.com>,
Wu Fengguang <fengguang.wu@...el.com>, Jan Kara <jack@...e.cz>,
Rik van Riel <riel@...hat.com>,
Chris Mason <chris.mason@...cle.com>,
Theodore Ts'o <tytso@....edu>,
Andreas Dilger <adilger.kernel@...ger.ca>, xfs@....sgi.com,
linux-btrfs@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [patch 2/2/4] mm: try to distribute dirty pages fairly across zones

On Wed, Sep 28, 2011 at 09:11:54AM +0200, Johannes Weiner wrote:
> On Wed, Sep 28, 2011 at 02:56:40PM +0900, Minchan Kim wrote:
> > On Fri, Sep 23, 2011 at 04:42:48PM +0200, Johannes Weiner wrote:
> > > The maximum number of dirty pages that exist in the system at any time
> > > is determined by a number of pages considered dirtyable and a
> > > user-configured percentage of those, or an absolute number in bytes.
> >
> > It's an explanation of the old approach.
>
> What do you mean? This does not change with this patch. We still
> have a number of dirtyable pages and a limit that is applied
> relatively to this number.
>
> > > This number of dirtyable pages is the sum of memory provided by all
> > > the zones in the system minus their lowmem reserves and high
> > > watermarks, so that the system can retain a healthy number of free
> > > pages without having to reclaim dirty pages.
> >
> > It's an explanation of the new approach.
>
> Same here, this aspect is also not changed with this patch!
>
> > > But there is a flaw in that we have a zoned page allocator which does
> > > not care about the global state but rather the state of individual
> > > memory zones. And right now there is nothing that prevents one zone
> > > from filling up with dirty pages while other zones are spared, which
> > > frequently leads to situations where kswapd, in order to restore the
> > > watermark of free pages, does indeed have to write pages from that
> > > zone's LRU list. This can interfere so badly with IO from the flusher
> > > threads that major filesystems (btrfs, xfs, ext4) mostly ignore write
> > > requests from reclaim already, taking away the VM's only possibility
> > > to keep such a zone balanced, aside from hoping the flushers will soon
> > > clean pages from that zone.
> >
> > It's an explanation of the old approach, again!
> > Shouldn't we move the above phrase about the new approach below?
>
> Everything above describes the current behaviour (at the point of this
> patch, so respecting lowmem_reserve e.g. is part of the current
> behaviour by now) and its problems. And below follows a description
> of how the patch tries to fix it.
It seems it was not a good choice on my part to use the terms "old" and
"new" here. Hannes, please ignore it, it's not a biggie.
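
For what it's worth, here is how I read the arithmetic described above,
written up as a small userspace model so we can check we mean the same
thing. It is only a rough sketch of the idea, not the kernel code; the
zone sizes, watermarks, reserves and the 20% ratio below are made-up
example numbers.

/*
 * Userspace model of the dirty-limit arithmetic as I read the
 * changelog; not kernel code.  All numbers are invented examples.
 */
#include <stdio.h>

struct zone {
	const char *name;
	unsigned long present_pages;	/* pages managed by this zone */
	unsigned long high_wmark;	/* high watermark, in pages */
	unsigned long lowmem_reserve;	/* lowmem reserve, in pages */
};

/* Pages in this zone that may be dirtied: size minus what must stay free. */
static unsigned long zone_dirtyable(const struct zone *z)
{
	unsigned long keep_free = z->high_wmark + z->lowmem_reserve;

	return z->present_pages > keep_free ? z->present_pages - keep_free : 0;
}

int main(void)
{
	struct zone zones[] = {
		{ "DMA32",  1000000, 20000, 50000 },
		{ "Normal", 3000000, 60000,     0 },
	};
	unsigned int dirty_ratio = 20;		/* vm.dirty_ratio, percent */
	unsigned long global_dirtyable = 0;
	size_t i;

	/* Global dirtyable memory: sum over all zones. */
	for (i = 0; i < sizeof(zones) / sizeof(zones[0]); i++)
		global_dirtyable += zone_dirtyable(&zones[i]);

	/* Global dirty limit: a percentage of the dirtyable pages. */
	printf("global dirtyable %lu pages, global dirty limit %lu pages\n",
	       global_dirtyable, global_dirtyable * dirty_ratio / 100);

	/*
	 * What this patch adds, as I understand it: the same percentage is
	 * also applied per zone, so one zone cannot soak up the whole
	 * global dirty budget while the others stay clean.
	 */
	for (i = 0; i < sizeof(zones) / sizeof(zones[0]); i++)
		printf("zone %-6s dirty limit %lu pages\n", zones[i].name,
		       zone_dirtyable(&zones[i]) * dirty_ratio / 100);

	return 0;
}

With example numbers like these, a small zone gets a proportionally
small dirty budget, so the allocator can stop placing dirty page cache
there once that budget is used up, instead of letting the zone fill
with dirty pages that kswapd would then have to write back itself.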
--
Kind regards,
Minchan Kim