Message-ID: <4FD2C6C5.1070900@linaro.org>
Date:	Fri, 08 Jun 2012 20:45:09 -0700
From:	John Stultz <john.stultz@...aro.org>
To:	KOSAKI Motohiro <kosaki.motohiro@...il.com>
CC:	Dave Hansen <dave@...ux.vnet.ibm.com>,
	Dmitry Adamushko <dmitry.adamushko@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Android Kernel Team <kernel-team@...roid.com>,
	Robert Love <rlove@...gle.com>, Mel Gorman <mel@....ul.ie>,
	Hugh Dickins <hughd@...gle.com>,
	Rik van Riel <riel@...hat.com>,
	Dave Chinner <david@...morbit.com>, Neil Brown <neilb@...e.de>,
	Andrea Righi <andrea@...terlinux.com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	Taras Glek <tgek@...illa.com>, Mike Hommey <mh@...ndium.org>,
	Jan Kara <jack@...e.cz>
Subject: Re: [PATCH 3/3] [RFC] tmpfs: Add FALLOC_FL_MARK_VOLATILE/UNMARK_VOLATILE
 handlers

On 06/07/2012 09:50 PM, KOSAKI Motohiro wrote:
> (6/7/12 11:03 PM), John Stultz wrote:
>
>> So I'm falling back to using a shrinker for now, but I think Dmitry's
>> point is an interesting one, and am interested in finding a better
>> place to trigger purging volatile ranges from the mm code. If anyone 
>> has any
>> suggestions, let me know, otherwise I'll go back to trying to better 
>> grok the mm code.
>
> I dislike VM features that abuse shrink_slab(), because it was not
> designed as a generic callback; it was designed for shrinking
> filesystem metadata. The VM keeps a balance between page scanning and
> slab scanning, so widespread misuse of shrink_slab may break that
> balancing logic, i.e. drop too many icache/dcache entries and hurt
> performance.
>
> As long as the code impact is small, I'd prefer to connect directly
> with the VM reclaim code.

I can see your concern about misusing the shrinker code. Your other
email's point about the problem of LRU range-purging behavior on a NUMA
system makes sense too.  Unfortunately, I'm not yet familiar enough with
the reclaim core to sort out how best to track and purge volatile
ranges from within the VM's reclaim code.

So for now, I've moved the code back to using the shrinker (fixing a
few bugs along the way). Thus, we currently manage the ranges like so:

     [per-fs volatile range LRU head] -> [volatile range] ->
         [volatile range] -> [volatile range]

with the per-fs shrinker zapping the volatile ranges from the LRU.
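Roughly, I imagine the shrinker side looking something like the sketch
below. This is only my illustration of the shape described above,
against the current shrinker API; the volatile_range structure, the
per-fs list/lock names, and volatile_range_purge() are placeholders,
not the actual patch code.

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/shrinker.h>

/* Placeholder stand-in for the range structure in the patch. */
struct volatile_range {
	struct list_head	lru;		/* entry on the per-fs LRU */
	pgoff_t			start_page;
	pgoff_t			end_page;
	/* ... link back to the owning file/mapping ... */
};

static LIST_HEAD(volatile_lru);			/* per-fs LRU head */
static DEFINE_SPINLOCK(volatile_lock);
static unsigned long nr_volatile_pages;		/* pages covered by LRU'd ranges */

/* Purge one range: unlink it and zap its backing pages. */
static unsigned long volatile_range_purge(struct volatile_range *range)
{
	unsigned long nr = range->end_page - range->start_page;

	list_del_init(&range->lru);
	/* ... truncate/zap the range's backing pages here ... */
	return nr;
}

static int volatile_shrink(struct shrinker *s, struct shrink_control *sc)
{
	struct volatile_range *range, *next;
	unsigned long nr_to_scan = sc->nr_to_scan;

	if (!nr_to_scan)	/* query pass: just report the object count */
		return nr_volatile_pages;

	spin_lock(&volatile_lock);
	list_for_each_entry_safe(range, next, &volatile_lru, lru) {
		unsigned long freed = volatile_range_purge(range);

		nr_volatile_pages -= freed;
		if (freed >= nr_to_scan)
			break;
		nr_to_scan -= freed;
	}
	spin_unlock(&volatile_lock);

	return nr_volatile_pages;
}

static struct shrinker volatile_shrinker = {
	.shrink	= volatile_shrink,
	.seeks	= DEFAULT_SEEKS,
};
/* registered at mount time via register_shrinker(&volatile_shrinker) */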

I *think* that, ideally, the pages in a volatile range should be
treated much like non-dirty file-backed pages: there is a cost to
restore them, but freeing them is very cheap.  The trick is that
volatile ranges introduce a new relationship between pages. Since the
neighboring virtual pages in a volatile range are in effect tied
together, purging one ruins the value of keeping the others,
regardless of which zone they physically reside in.

So maybe the right approach is to give up the per-fs volatile range
LRU and try a variant of what DaveC and DaveH have suggested: let the
page-based LRU reclamation handle the selection on a physical-page
basis, but then zap the entirety of the neighboring range if any one
page is reclaimed.  To preserve the range-based LRU behavior, we would
activate all the pages in a range together when the range is marked
volatile.  Since we assume ranges are untouched while volatile, that
should preserve LRU purging behavior on single-node systems, and on
multi-node systems it should approximate it fairly closely.

My main concern with this approach is that marking and unmarking
volatile ranges needs to be fast, so I'm worried about the additional
overhead of activating each of the contained pages on mark_volatile.
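To make that per-page cost concrete, the mark-side loop would look
roughly like this (my sketch only, using existing pagecache helpers;
volatile_range_activate() is a made-up name, not patch code):

#include <linux/pagemap.h>
#include <linux/swap.h>		/* mark_page_accessed() */

static void volatile_range_activate(struct address_space *mapping,
				    pgoff_t start, pgoff_t end)
{
	pgoff_t index;

	for (index = start; index < end; index++) {
		struct page *page = find_get_page(mapping, index);

		if (!page)
			continue;
		/*
		 * Nudge the page toward the active list so the whole
		 * range ages together.  This per-page walk is exactly
		 * the mark_volatile overhead that worries me.
		 */
		mark_page_accessed(page);
		page_cache_release(page);
	}
}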

The other question I have with this approach is that, on a system
without swap, it *seems* (I'm not totally sure I understand it yet)
that tmpfs file pages will be skipped over when we call shrink_lruvec.
So we may need to add a new lru_list enum entry and nr[] slot (maybe
LRU_VOLATILE?).  Then, when we mark a range as volatile, instead of
just activating it we would move it to the volatile LRU, and when we
shrink from that list, we would call back into the filesystem to
trigger purging of the entire range, as sketched below.
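Concretely, I'm picturing something like this hypothetical extension
to enum lru_list in include/linux/mmzone.h (LRU_VOLATILE is the only
new entry; everything else is as it stands today):

enum lru_list {
	LRU_INACTIVE_ANON = LRU_BASE,
	LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
	LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
	LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
	LRU_UNEVICTABLE,
	LRU_VOLATILE,		/* new: pages in marked-volatile ranges */
	NR_LRU_LISTS
};

get_scan_count()/shrink_lruvec() would then need to know to scan
nr[LRU_VOLATILE] even when there is no swap, and reclaim from that
list would call back into the filesystem rather than freeing pages
one by one.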

Does that sound reasonable?  Any other suggested approaches?  I'll think 
some more about it this weekend and try to get a patch scratched out 
early next week.

thanks
-john
