Open Source and information security mailing list archives
 
Message-ID: <Pine.LNX.4.64.0911301141010.20054@sister.anvils>
Date:	Mon, 30 Nov 2009 11:55:53 +0000 (GMT)
From:	Hugh Dickins <hugh.dickins@...cali.co.uk>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Izik Eidus <ieidus@...hat.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Chris Wright <chrisw@...hat.com>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org,
	"kosaki.motohiro@...fujitsu.com" <kosaki.motohiro@...fujitsu.com>
Subject: Re: [PATCH 2/9] ksm: let shared pages be swappable

On Mon, 30 Nov 2009, KAMEZAWA Hiroyuki wrote:
> 
> Hmm. I'm not sure how many pages are shared in a system but
> can't we add some threshold for avoiding too much scan against shared pages ?
> (in vmscan.c)
> like..
>       
>        if (page_mapcount(page) > (XXXX >> scan_priority))
> 		return 1;
> 
> I saw terrible slow downs in shmem-swap-out in old RHELs (at user support).
> (Added kosaki to CC.)
> 
> After this patch, the number of shared swappable page will be unlimited.

I don't think KSM swapping changes the story here at all: I don't
think it significantly increases the likelihood of pages with very
high mapcounts on the LRUs.  You've met the issue with shmem; I've
always thought shared library text pages would be a problem too.

I've often thought that some kind of "don't bother if the mapcount is
too high" check in vmscan.c might help - though I don't think I've
ever noticed the bugreport it would help with ;)

I used to imagine doing up to a certain number of unmap attempts
inside the rmap loops and then breaking out (that would help with
those reports of huge anon_vma lists); but that would mean starting
the next scan from where we left off, which would be difficult with
the prio_tree.

Your proposal above (adjusting the limit according to scan_priority,
yes that's important) looks very promising to me.
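[Editor's note: a hedged sketch of the check Kamezawa proposed, not code from any actual patch. The base constant and helper name are invented for illustration; in vmscan.c the priority starts at DEF_PRIORITY and decreases toward 0 as reclaim pressure rises, so the right shift makes the threshold tighten when pressure is low and relax when it is high.]

```c
/* Illustrative base for the threshold; the real value ("XXXX" in the
 * proposal above) would need tuning. */
#define MAPCOUNT_SKIP_BASE 256

/* Return nonzero if a page's mapcount is high enough that reclaim
 * should skip it at this scan priority.  At low pressure (large
 * priority value) the shifted threshold is small, so heavily shared
 * pages are skipped cheaply; as priority approaches 0 the threshold
 * grows toward MAPCOUNT_SKIP_BASE and more pages get scanned. */
static int mapcount_over_threshold(int mapcount, int scan_priority)
{
    return mapcount > (MAPCOUNT_SKIP_BASE >> scan_priority);
}
```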

Hugh
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
