Message-ID: <20100612001729.GE9946@csn.ul.ie>
Date: Sat, 12 Jun 2010 01:17:29 +0100
From: Mel Gorman <mel@....ul.ie>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, Dave Chinner <david@...morbit.com>,
Chris Mason <chris.mason@...cle.com>,
Nick Piggin <npiggin@...e.de>, Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 5/6] vmscan: Write out ranges of pages contiguous to
the inode where possible
On Fri, Jun 11, 2010 at 02:33:37PM -0700, Andrew Morton wrote:
> On Fri, 11 Jun 2010 21:44:11 +0100
> Mel Gorman <mel@....ul.ie> wrote:
>
> > > Well. The main problem is that we're doing too much IO off the LRU of
> > > course.
> > >
> >
> > What would be considered "too much IO"?
>
> Enough to slow things down ;)
>
I like it. We don't know what it is, but we'll know when we see it :)
> This problem used to hurt a lot. Since those times we've decreased the
> default value of /proc/sys/vm/dirty*ratio by a lot, which surely
> papered over this problem a lot. We shouldn't forget that those ratios
> _are_ tunable, after all. If we make a change which explodes the
> kernel when someone's tuned to 40% then that's a problem and we'll need
> to scratch our heads over the magnitude of that problem.
>
Ok. What could be done is finalise the tracepoints (they are counting some
stuff they shouldn't) and merge them. They can measure the amount of time
kswapd was awake but, critically, also how long direct reclaim was going on. A
test could be to monitor the tracepoints and vmstat, start whatever the
workload is, generate a report and see what percentage of the total time was
spent in direct reclaim. For the IO, it would be a comparison of the IO
generated by page reclaim against the total IO. We'd need to decide on
"goodness" values for these ratios, but at least they would be measurable
and, broadly speaking, the lower the better and preferably 0 for both.
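
As a rough illustration of what such a report would boil down to, here is a
minimal sketch that computes the two ratios from made-up counters. The struct,
its field names and the numbers are hypothetical, not the actual tracepoint
or vmstat output format:

/*
 * Hypothetical sketch: compute the two "goodness" ratios described
 * above from counters a tracepoint/vmstat post-processing step would
 * collect.  Field names and values are illustrative only.
 */
#include <stdio.h>

struct reclaim_sample {
        double elapsed_secs;            /* wall-clock time of the test run */
        double direct_reclaim_secs;     /* time stalled in direct reclaim */
        unsigned long total_pages_written;      /* all writeback in the run */
        unsigned long reclaim_pages_written;    /* writeback issued by reclaim */
};

int main(void)
{
        /* Example numbers only */
        struct reclaim_sample s = {
                .elapsed_secs = 600.0,
                .direct_reclaim_secs = 12.5,
                .total_pages_written = 400000,
                .reclaim_pages_written = 1500,
        };

        double time_ratio = 100.0 * s.direct_reclaim_secs / s.elapsed_secs;
        double io_ratio = 100.0 * s.reclaim_pages_written / s.total_pages_written;

        /* Lower is better; ideally both are close to 0. */
        printf("direct reclaim time: %.2f%% of run\n", time_ratio);
        printf("reclaim-issued IO:   %.2f%% of all writeback\n", io_ratio);
        return 0;
}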
> As for a workload which triggers the problem on a large machine which
> is tuned to 20%/10%: dunno. If we're reliably activating pages when
> dirtying them then perhaps it's no longer a problem with the default
> tuning. I'd do some testing with mem=256M though - that has a habit of
> triggering weirdnesses.
>
Will do. I was testing with 2G, which is probably too much.
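
On that note, a trivial sketch for recording what a given run was tuned to,
so the report captures the ratios in effect. The /proc/sys/vm paths are the
real tunables mentioned above; the helper is just illustrative:

/* Minimal sketch: record the current dirty ratio tunables for a test run. */
#include <stdio.h>

static int read_tunable(const char *path)
{
        int val = -1;
        FILE *f = fopen(path, "r");

        if (f) {
                if (fscanf(f, "%d", &val) != 1)
                        val = -1;
                fclose(f);
        }
        return val;
}

int main(void)
{
        printf("vm.dirty_ratio = %d\n",
               read_tunable("/proc/sys/vm/dirty_ratio"));
        printf("vm.dirty_background_ratio = %d\n",
               read_tunable("/proc/sys/vm/dirty_background_ratio"));
        return 0;
}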
> btw, I'm trying to work out if zap_pte_range() really needs to run
> set_page_dirty(). Didn't (pte_dirty() && !PageDirty()) pages get
> themselves stamped out?
>
I don't remember anything specific in that area. Will check it out if
someone doesn't have a quick answer.
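
For anyone following along, the pattern being questioned is roughly the
following. This is a heavily simplified fragment, not the actual
zap_pte_range() code, and it assumes the surrounding teardown context
(mm, vma, addr, pte, tlb) is in scope:

/*
 * Simplified sketch of the pattern in question, not the exact code:
 * when a dirty PTE is torn down, the hardware dirty bit is propagated
 * to the struct page so writeback still sees the page as dirty.
 */
pte_t ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
struct page *page = vm_normal_page(vma, addr, ptent);

if (page) {
        if (pte_dirty(ptent))
                set_page_dirty(page);   /* the pte_dirty() && !PageDirty() case */
        if (pte_young(ptent))
                mark_page_accessed(page);
}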
--
Mel Gorman
Part-time PhD Student, University of Limerick
Linux Technology Center, IBM Dublin Software Lab