Date:	Mon, 30 Nov 2009 18:15:44 +0900 (JST)
From:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Cc:	kosaki.motohiro@...fujitsu.com,
	Hugh Dickins <hugh.dickins@...cali.co.uk>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Izik Eidus <ieidus@...hat.com>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Chris Wright <chrisw@...hat.com>, linux-kernel@...r.kernel.org,
	linux-mm@...ck.org
Subject: Re: [PATCH 2/9] ksm: let shared pages be swappable

> On Tue, 24 Nov 2009 16:42:15 +0000 (GMT)
> Hugh Dickins <hugh.dickins@...cali.co.uk> wrote:
> > +int page_referenced_ksm(struct page *page, struct mem_cgroup *memcg,
> > +			unsigned long *vm_flags)
> > +{
> > +	struct stable_node *stable_node;
> > +	struct rmap_item *rmap_item;
> > +	struct hlist_node *hlist;
> > +	unsigned int mapcount = page_mapcount(page);
> > +	int referenced = 0;
> > +	struct vm_area_struct *vma;
> > +
> > +	VM_BUG_ON(!PageKsm(page));
> > +	VM_BUG_ON(!PageLocked(page));
> > +
> > +	stable_node = page_stable_node(page);
> > +	if (!stable_node)
> > +		return 0;
> > +
> 
> Hmm. I'm not sure how many pages are shared in a system, but
> can't we add some threshold to avoid too much scanning of shared pages?
> (in vmscan.c)
> like..
>       
>        if (page_mapcount(page) > (XXXX >> scan_priority))
> 		return 1;
> 
> I saw terrible slowdowns in shmem swap-out in old RHELs (at user support).
> (Added kosaki to CC.)
> 
> After this patch, the number of shared swappable pages will be unlimited.

Probably it doesn't matter. I mean:

  - KSM sharing and shmem sharing have almost the same performance characteristics.
  - if memory pressure is low, the SplitLRU VM doesn't scan the anon list much.

If KSM swap turns out to be too costly, we need to improve anon list scanning generically.
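
For concreteness, the mapcount cutoff suggested above might look roughly
like the sketch below. The helper name, the KSM_MAPCOUNT_BASE constant and
its placement in vmscan.c are all hypothetical, not part of Hugh's patch:

----------------------------
#include <linux/mm.h>

/* Hypothetical tuning knob; not a real kernel constant. */
#define KSM_MAPCOUNT_BASE	4096

/*
 * Sketch: treat a heavily shared page as "referenced" so reclaim skips it.
 * The cutoff grows as scan_priority drops, so under heavy memory pressure
 * even widely shared pages still get scanned.
 */
static int page_too_shared(struct page *page, int scan_priority)
{
	return page_mapcount(page) > (KSM_MAPCOUNT_BASE >> scan_priority);
}
----------------------------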


btw, I'm not sure why the kmem_cache_zalloc() below is necessary. Why can't
we use the stack?

----------------------------
+	/*
+	 * Temporary hack: really we need anon_vma in rmap_item, to
+	 * provide the correct vma, and to find recently forked instances.
+	 * Use zalloc to avoid weirdness if any other fields are involved.
+	 */
+	vma = kmem_cache_zalloc(vm_area_cachep, GFP_ATOMIC);
+	if (!vma) {
+		spin_lock(&ksm_fallback_vma_lock);
+		vma = &ksm_fallback_vma;
+	}


