Message-ID: <20100726163011.GA23467@barrios-desktop>
Date: Tue, 27 Jul 2010 01:30:11 +0900
From: Minchan Kim <minchan.kim@...il.com>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Mel Gorman <mel@....ul.ie>,
Christoph Hellwig <hch@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Dave Chinner <david@...morbit.com>,
Chris Mason <chris.mason@...cle.com>,
Nick Piggin <npiggin@...e.de>, Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [PATCH 7/8] writeback: sync old inodes first in background
writeback
On Mon, Jul 26, 2010 at 12:37:09PM +0800, Wu Fengguang wrote:
> On Mon, Jul 26, 2010 at 12:11:59PM +0800, Minchan Kim wrote:
> > On Mon, Jul 26, 2010 at 12:27 PM, Wu Fengguang <fengguang.wu@...el.com> wrote:
> > > On Sun, Jul 25, 2010 at 08:03:45PM +0800, Minchan Kim wrote:
> > >> On Sun, Jul 25, 2010 at 07:43:20PM +0900, KOSAKI Motohiro wrote:
> > >> > Hi
> > >> >
> > >> > sorry for the delay.
> > >> >
> > >> > > Will you be picking it up or should I? The changelog should be more or less
> > >> > > the same as yours and consider it
> > >> > >
> > >> > > Signed-off-by: Mel Gorman <mel@....ul.ie>
> > >> > >
> > >> > > It'd be nice if the original tester is still knocking around and willing
> > >> > > to confirm the patch resolves his/her problem. I am running this patch on
> > >> > > my desktop at the moment and it does feel a little smoother but it might be
> > >> > > my imagination. I had trouble with odd stalls that I never pinned down and
> > >> > > was attributing to the machine being commonly heavily loaded but I haven't
> > >> > > noticed them today.
> > >> > >
> > >> > > It also needs an Acked-by or Reviewed-by from Kosaki Motohiro as it alters
> > >> > > logic he introduced in commit [78dc583: vmscan: low order lumpy reclaim also
> > >> > > should use PAGEOUT_IO_SYNC]
> > >> >
> > >> > My review didn't find any bugs. However, I think the original thread
> > >> > contains too much guesswork; we need a way to reproduce the issue and
> > >> > confirm it.
> > >> >
> > >> > At least, we need three confirmations:
> > >> > o Is the original issue still there?
> > >> > o Is DEF_PRIORITY/3 the best value?
> > >>
> > >> I agree. Wu, how did you determine DEF_PRIORITY/3 of the LRU?
> > >> I guess the system has 512M of memory and 22M of writeback pages,
> > >> so you may have chosen it to skip at most 32M of writeback pages.
> > >> Is that right?
> > >
> > > For 512M of memory, DEF_PRIORITY/3 means 32M of dirty _or_ writeback
> > > pages. shrink_inactive_list() first calls
> > > shrink_page_list(PAGEOUT_IO_ASYNC) and then optionally
> > > shrink_page_list(PAGEOUT_IO_SYNC), so dirty pages will first be
> > > converted to writeback pages and then optionally be waited on.
> > >
> > > The dirty/writeback pages may go up to 512M*20% = 100M, so 32M looks
> > > like a reasonable value.
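
(To spell out the flow for other readers: roughly, and simplified from my
reading of 2.6.35-era mm/vmscan.c -- the exact retry condition differs --
shrink_inactive_list() does:

	/* First pass: queue dirty pages for asynchronous writeback. */
	nr_reclaimed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);

	/* Reclaim fell short: throttle, then wait on writeback pages. */
	if (nr_reclaimed < nr_taken && lumpy_reclaim) {
		congestion_wait(BLK_RW_ASYNC, HZ/10);
		nr_reclaimed += shrink_page_list(&page_list, sc,
						 PAGEOUT_IO_SYNC);
	}

And the arithmetic: DEF_PRIORITY == 12, so DEF_PRIORITY/3 == 4, and
512M >> 4 = 32M, i.e. 1/16 = 6.25% of memory, versus the 20% dirty limit
of 512M * 20% = ~100M.)
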
> >
> > Why do you think it's a reasonable value?
> > I mean, why not 12.5% or 3.125%? Why did you select 6.25%?
> > I'm not against you; I'm just curious, and it needs more explanation.
> > It might be something _only I_ don't know. :(
>
> It was more or less randomly selected. I'm also OK with 3.125%. It's a
> threshold for turning on a _last resort_ mechanism, so it doesn't need
> to be optimal..

Okay. The reason I asked is that I don't want to add a new magic value to
the VM without a detailed comment.
Whenever I review the source code, such values always make me suffer. :(
Now we have a great tool called 'git'.
Please write down in detail why the number was selected whenever we add a
new magic value. :)
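
For example, even a short comment along these lines (hypothetical wording,
not from the patch) would help a lot:

	/*
	 * DEF_PRIORITY/3 == 4: consider reclaim congested once more than
	 * 1/16 (6.25%, 32M on a 512M box) of the LRU is dirty or under
	 * writeback.  The exact value is not critical: it only gates a
	 * last resort mechanism and just needs to stay well below the
	 * 20% dirty limit.
	 */
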
Thanks, Wu.
--
Kind regards,
Minchan Kim