Message-ID: <20100325012316.GB27304@cmpxchg.org>
Date: Thu, 25 Mar 2010 02:23:16 +0100
From: Johannes Weiner <hannes@...xchg.org>
To: Andrea Arcangeli <aarcange@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [rfc 5/5] mincore: transparent huge page support
On Wed, Mar 24, 2010 at 11:48:58PM +0100, Andrea Arcangeli wrote:
> On Tue, Mar 23, 2010 at 03:35:02PM +0100, Johannes Weiner wrote:
> > +static int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> > + unsigned long addr, unsigned long end,
> > + unsigned char *vec)
> > +{
> > + int huge = 0;
> > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > + spin_lock(&vma->vm_mm->page_table_lock);
> > + if (likely(pmd_trans_huge(*pmd))) {
> > + huge = !pmd_trans_splitting(*pmd);
>
> Under mmap_sem (read or write) a hugepage can't materialize under
> us. So here the pmd_trans_huge can be lockless and run _before_ taking
> the page_table_lock. That's the invariant I used to keep identical
> performance for all fast paths.
Wait, there _is_ an unlocked fast-path pmd_trans_huge() check in
mincore_pmd_range(); maybe you missed it?  This function is never
called unless that lockless check already saw a huge pmd.

So the check above is the _second_ one, done under the lock to get a
stable read on an entry that could be splitting, or could already have
been split, while we checked locklessly.