Message-ID: <20100723105719.GE5300@csn.ul.ie>
Date: Fri, 23 Jul 2010 11:57:19 +0100
From: Mel Gorman <mel@....ul.ie>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Christoph Hellwig <hch@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Dave Chinner <david@...morbit.com>,
Chris Mason <chris.mason@...cle.com>,
Nick Piggin <npiggin@...e.de>, Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Minchan Kim <minchan.kim@...il.com>
Subject: Re: [PATCH 7/8] writeback: sync old inodes first in background
writeback
On Fri, Jul 23, 2010 at 05:45:15PM +0800, Wu Fengguang wrote:
> On Thu, Jul 22, 2010 at 06:48:23PM +0800, Mel Gorman wrote:
> > On Thu, Jul 22, 2010 at 05:21:55PM +0800, Wu Fengguang wrote:
> > > > I guess this new patch is more problem-oriented and acceptable:
> > > >
> > > > --- linux-next.orig/mm/vmscan.c 2010-07-22 16:36:58.000000000 +0800
> > > > +++ linux-next/mm/vmscan.c 2010-07-22 16:39:57.000000000 +0800
> > > > @@ -1217,7 +1217,8 @@ static unsigned long shrink_inactive_lis
> > > > count_vm_events(PGDEACTIVATE, nr_active);
> > > >
> > > > nr_freed += shrink_page_list(&page_list, sc,
> > > > - PAGEOUT_IO_SYNC);
> > > > + priority < DEF_PRIORITY / 3 ?
> > > > + PAGEOUT_IO_SYNC : PAGEOUT_IO_ASYNC);
> > > > }
> > > >
> > > > nr_reclaimed += nr_freed;
> > >
> > > This one looks better:
> > > ---
> > > vmscan: raise the bar to PAGEOUT_IO_SYNC stalls
> > >
> > > Fix "system goes totally unresponsive with many dirty/writeback pages"
> > > problem:
> > >
> > > http://lkml.org/lkml/2010/4/4/86
> > >
> > > The root cause is, wait_on_page_writeback() is called too early in the
> > > direct reclaim path, which blocks many random/unrelated processes when
> > > some slow (USB stick) writeback is on the way.
> > >
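For reference, the call site being described is the PageWriteback()
handling in shrink_page_list(), which in linux-next looks roughly like
this:

	if (PageWriteback(page)) {
		/*
		 * The sync path waits for writeback to complete on
		 * every such page, one by one, at the speed of the
		 * backing device; the async path just skips the page.
		 */
		if (sync_writeback == PAGEOUT_IO_SYNC && may_enter_fs)
			wait_on_page_writeback(page);
		else
			goto keep_locked;
	}

So with PAGEOUT_IO_SYNC and a slow device, each waited-on page can stall
the direct reclaimer for a long time.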
> >
> > So, if lumpy reclaim is a factor, what's the bet that it's
> > high-order-but-low-cost allocations such as fork() that are getting
> > caught by this since [78dc583d: vmscan: low order lumpy reclaim also
> > should use PAGEOUT_IO_SYNC] was introduced?
>
> Sorry I'm a bit confused by your wording..
>
After reading the thread, I realised that fork() stalling could be a
factor. That commit allows lumpy reclaim and PAGEOUT_IO_SYNC to be used for
high-order allocations such as those used by fork(). It might have been an
oversight to allow order-1 to use PAGEOUT_IO_SYNC too easily.
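For reference, the trigger in linux-next looks roughly like this
(mm/vmscan.c):

static void set_lumpy_reclaim_mode(int priority, struct scan_control *sc)
{
	/*
	 * Expensive allocations enable lumpy reclaim immediately.
	 * Cheap high-order allocations, such as the order-1 stack
	 * allocation in fork(), enable it as soon as priority
	 * drops below DEF_PRIORITY - 2.
	 */
	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
		sc->lumpy_reclaim_mode = 1;
	else if (sc->order && priority < DEF_PRIORITY - 2)
		sc->lumpy_reclaim_mode = 1;
	else
		sc->lumpy_reclaim_mode = 0;
}

Once lumpy_reclaim_mode is set, shrink_inactive_list() is free to fall
into the PAGEOUT_IO_SYNC path, so an order-1 fork() allocation is only
two priority levels away from synchronous waits.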
> > That could manifest to the user as stalls creating new processes when
> > under heavy IO. I would be surprised if it froze the entire system, but
> > certainly any new work would feel very slow.
> >
> > > A simple dd can easily create a big range of dirty pages in the LRU
> > > list. Therefore priority can easily go below (DEF_PRIORITY - 2) in a
> > > typical desktop, which triggers the lumpy reclaim mode and hence
> > > wait_on_page_writeback().
> > >
> >
> > which triggers the lumpy reclaim mode for high-order allocations.
>
> Exactly. Changelog updated.
>
> > lumpy reclaim mode is not something that is triggered just because priority
> > is high.
>
> Right.
>
> > I think there is a second possibility for causing stalls as well that is
> > unrelated to lumpy reclaim. Once dirty_limit is reached, new page faults may
> > also result in stalls. If it is taking a long time to write back dirty data,
> > random processes could be getting stalled just because they happened to dirty
> > data at the wrong time. This would be the case if the main dirtying process
> > (e.g. dd) is not calling sync and dropping pages it's no longer using.
>
> The dirty_limit throttling will slow down the dirty process to the
> writeback throughput. If a process is dirtying files on sda (HDD),
> it will be throttled at 80MB/s. If another process is dirtying files
> on sdb (USB 1.1), it will be throttled at 1MB/s.
>
It will slow down the dirty process doing the dd, but can it also slow
down other processes that just happened to dirty pages at the wrong
time?
> So dirty throttling will slow things down. However the slowdown
> should be smooth (a series of 100ms stalls instead of a sudden 10s
> stall), and won't impact random processes (which do no read/write IO
> at all).
>
Ok.
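To illustrate why the slowdown is smooth, here is a minimal sketch of
the throttling shape; the real logic is balance_dirty_pages() in
mm/page-writeback.c, and the helper names below are made up for
illustration:

/* Sketch only: the helpers here are invented, not real kernel API */
static void dirty_throttle_sketch(void)
{
	while (nr_dirty_and_writeback() > dirty_threshold()) {
		/* write back some of the caller's own dirty pages */
		write_some_dirty_pages();

		/*
		 * Sleep ~100ms per iteration: the dirtier sees a
		 * series of short stalls rather than one long one,
		 * and tasks doing no IO never enter this loop.
		 */
		congestion_wait(BLK_RW_ASYNC, HZ/10);
	}
}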
> > > In Andreas' case, 512MB/1024 = 512KB, which is way too low compared to
> > > the 22MB of writeback and 190MB of dirty pages. There can easily be a
> > > continuous range of 512KB of dirty/writeback pages in the LRU, which
> > > will trigger the wait logic.
> > >
> > > To make it worse, when there are 50MB of writeback pages and USB 1.1 is
> > > writing them at 1MB/s, wait_on_page_writeback() may get stuck for up to
> > > 50 seconds.
> > >
> > > So only enter sync write&wait when priority goes below DEF_PRIORITY/3,
> > > or 6.25% LRU. As the default dirty throttle ratio is 20%, sync write&wait
> > > will hardly be triggered by pure dirty pages.
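To put numbers on that threshold: DEF_PRIORITY is 12, so DEF_PRIORITY / 3
is 4, and the scanner examines LRU >> priority pages per pass. By
priority 4 it is already scanning LRU >> 4, i.e. 1/16 or 6.25% of the
LRU, before sync write&wait can kick in.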
> > >
> > > Signed-off-by: Wu Fengguang <fengguang.wu@...el.com>
> > > ---
> > > mm/vmscan.c | 4 ++--
> > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > --- linux-next.orig/mm/vmscan.c 2010-07-22 16:36:58.000000000 +0800
> > > +++ linux-next/mm/vmscan.c 2010-07-22 17:03:47.000000000 +0800
> > > @@ -1206,7 +1206,7 @@ static unsigned long shrink_inactive_lis
> > > * but that should be acceptable to the caller
> > > */
> > > if (nr_freed < nr_taken && !current_is_kswapd() &&
> > > - sc->lumpy_reclaim_mode) {
> > > + sc->lumpy_reclaim_mode && priority < DEF_PRIORITY / 3) {
> > > congestion_wait(BLK_RW_ASYNC, HZ/10);
> > >
> >
> > This will also delay waiting on congestion for really high-order
> > allocations such as huge pages, some video decoders and the like, which
> > really should be stalling.
>
> I absolutely agree that high order allocators should be somehow throttled.
>
> However, given that one can easily create a large _continuous_ range of
> dirty LRU pages, letting someone bump all the way through the range
> sounds a bit cruel.
>
> > How about the following compile-tested diff? It takes the cost of
> > the high-order allocation and the priority into account when deciding
> > whether to synchronously wait or not.
>
> Very nice patch. Thanks!
>
Will you be picking it up or should I? The changelog should be more or less
the same as yours, and you can consider it
Signed-off-by: Mel Gorman <mel@....ul.ie>
It'd be nice if the original tester were still knocking around and willing
to confirm the patch resolves his/her problem. I am running this patch on
my desktop at the moment and it does feel a little smoother, but it might
be my imagination. I had trouble with odd stalls that I never pinned down
and was attributing to the machine being commonly heavily loaded, but I
haven't noticed them today.
It also needs an Acked-by or Reviewed-by from Kosaki Motohiro as it alters
logic he introduced in commit [78dc583: vmscan: low order lumpy reclaim also
should use PAGEOUT_IO_SYNC].
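For anyone reading along without the diff (snipped below), the shape of
the check is roughly the following illustrative sketch; the function name
and exact thresholds are approximate, not the diff itself:

static inline bool should_stall_for_writeback(unsigned long nr_taken,
					      unsigned long nr_freed,
					      int priority,
					      struct scan_control *sc)
{
	int stall_priority;

	/* kswapd must never block on page writeback */
	if (current_is_kswapd())
		return false;

	/* Only lumpy reclaim needs contiguous pages badly enough */
	if (!sc->lumpy_reclaim_mode)
		return false;

	/* If everything isolated was reclaimed, there is no need */
	if (nr_freed == nr_taken)
		return false;

	/*
	 * Expensive allocations (order > PAGE_ALLOC_COSTLY_ORDER,
	 * e.g. huge pages) may stall at any priority; cheap ones
	 * such as an order-1 fork() stack must wait until the
	 * scanning priority has dropped much further.
	 */
	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
		stall_priority = DEF_PRIORITY;
	else
		stall_priority = DEF_PRIORITY / 3;

	return priority <= stall_priority;
}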
Thanks
> <SNIP>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab