Date:	Wed, 7 Jul 2010 10:43:11 +0100
From:	Mel Gorman <mel@....ul.ie>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	Minchan Kim <minchan.kim@...il.com>,
	Johannes Weiner <hannes@...xchg.org>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	linux-mm@...ck.org, Dave Chinner <david@...morbit.com>,
	Chris Mason <chris.mason@...cle.com>,
	Nick Piggin <npiggin@...e.de>, Rik van Riel <riel@...hat.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [PATCH 12/14] vmscan: Do not writeback pages in direct reclaim

On Tue, Jul 06, 2010 at 09:15:33PM -0400, Christoph Hellwig wrote:
> On Wed, Jul 07, 2010 at 01:24:58AM +0100, Mel Gorman wrote:
> > What I have now is direct writeback for anon pages. For file pages, be it
> > from kswapd or direct reclaim, I kick writeback pre-emptively by an amount
> > based on the dirty pages encountered, because monitoring from systemtap
> > indicated that we were getting a large percentage of the dirty file pages
> > at the end of the LRU lists (bad). Initial tests with sysbench show that
> > writeback from kswapd during page reclaim is reduced by 97% with this sort
> > of pre-emptive kicking of the flusher threads.
> 
> That sounds like yet another band aid to me.  Instead it would be much
> better not to have so many file pages at the end of the LRU by tuning
> the flusher threads and the VM better.
> 

Do you mean "so many dirty file pages"? I'm going to assume you do.

How do you suggest tuning this? The modification I tried was "if N dirty
pages are found during a SWAP_CLUSTER_MAX scan of pages, assume an average
dirtying density of at least that during the time those pages were inserted
on the LRU and, in response, ask the flushers to flush 1.5 times that amount".
This responds roughly to the conditions as they are encountered and is based
on scanning rates rather than time. It seemed like a reasonable option.
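
To make that concrete, something along these lines (a rough sketch only, not
the actual patch; the function name and counter are mine, and the only kernel
interface assumed is wakeup_flusher_threads()):

#include <linux/writeback.h>

/*
 * Rough sketch of the heuristic above.  nr_dirty is the number of dirty
 * pages seen while scanning a SWAP_CLUSTER_MAX batch off the end of the
 * inactive list.
 */
static void kick_flusher_threads(unsigned long nr_dirty)
{
	if (!nr_dirty)
		return;

	/*
	 * Assume the dirtying density seen in this batch also holds for
	 * the pages behind it on the LRU and ask the flusher threads to
	 * write back 1.5 times what was encountered.
	 */
	wakeup_flusher_threads(nr_dirty + nr_dirty / 2);
}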

Based on what I've seen, we are generally below the dirty_ratio and the
flushers are behaving as expected, so there is little tuning available there.
As new dirty pages are added to the inactive list, they are allowed to reach
the bottom of the LRU before the periodic sync kicks in. From what I can tell,
the flusher threads are already cleaning the oldest inodes first, and I'd
expect a rough correlation between the oldest inodes and the oldest pages.

We could reduce the dirty_ratio, but people already complain about workloads
that are not allowed to dirty enough pages. We could decrease the sync
interval for the flusher threads, but then IO might be started sooner than it
should be, and it might be unnecessary if the system is under no memory
pressure.
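
For reference, the dirty_ratio knob being discussed is a percentage of
dirtyable memory; roughly (a simplified sketch with an illustrative function
name, ignoring dirty_background_ratio and the per-BDI limits):

/*
 * Simplified illustration only: dirty_ratio is a percentage of dirtyable
 * memory.  The real calculation also takes dirty_background_ratio and the
 * per-BDI limits into account.
 */
static unsigned long dirty_thresh_pages(unsigned long dirtyable_pages,
					int dirty_ratio)
{
	return dirtyable_pages * dirty_ratio / 100;
}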

Alternatives?

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
