Message-ID: <Pine.LNX.4.64.0911301227530.24660@sister.anvils>
Date: Mon, 30 Nov 2009 12:38:33 +0000 (GMT)
From: Hugh Dickins <hugh.dickins@...cali.co.uk>
To: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Izik Eidus <ieidus@...hat.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Chris Wright <chrisw@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 2/9] ksm: let shared pages be swappable
On Mon, 30 Nov 2009, KOSAKI Motohiro wrote:
> > After this patch, the number of shared swappable pages will be unlimited.
>
> Probably, it doesn't matter. I mean
>
> - KSM sharing and shmem sharing have almost the same performance characteristics.
> - if memory pressure is low, the split-LRU VM doesn't scan the anon list much.
>
> if KSM swap is too costly, we need to improve anon list scanning generically.
Yes, we're in agreement that this issue is not new with KSM swapping.
> btw, I'm not sure why the kmem_cache_zalloc() below is necessary. Why can't we
> use the stack?
Well, I didn't use stack: partly because I'm so ashamed of the pseudo-vmas
on the stack in mm/shmem.c, which have put shmem_getpage() into reports
of high stack users (I've unfinished patches to deal with that); and
partly because page_referenced_ksm() and try_to_unmap_ksm() are on the
page reclaim path, possibly far down an already very deep stack.
But it's not something you or I should be worrying about: as the comment
says, this is just a temporary hack, to present a patch which gets KSM
swapping working in an understandable way, while leaving some corrections
and refinements to subsequent patches. This pseudo-vma is removed in the
very next patch.
Hugh
>
> ----------------------------
> +	/*
> +	 * Temporary hack: really we need anon_vma in rmap_item, to
> +	 * provide the correct vma, and to find recently forked instances.
> +	 * Use zalloc to avoid weirdness if any other fields are involved.
> +	 */
> +	vma = kmem_cache_zalloc(vm_area_cachep, GFP_ATOMIC);
> +	if (!vma) {
> +		spin_lock(&ksm_fallback_vma_lock);
> +		vma = &ksm_fallback_vma;
> +	}