Message-Id: <1248134546.30899.27.camel@pasglop>
Date: Tue, 21 Jul 2009 10:02:26 +1000
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: Nick Piggin <npiggin@...e.de>
Cc: Linux Memory Management <linux-mm@...ck.org>,
Linux-Arch <linux-arch@...r.kernel.org>,
linux-kernel@...r.kernel.org, linuxppc-dev@...abs.org,
Hugh Dickins <hugh@...cali.co.uk>
Subject: Re: [RFC/PATCH] mm: Pass virtual address to
[__]p{te,ud,md}_free_tlb()
On Mon, 2009-07-20 at 12:38 +0200, Nick Piggin wrote:
> On Mon, Jul 20, 2009 at 08:00:41PM +1000, Benjamin Herrenschmidt wrote:
> > On Mon, 2009-07-20 at 10:10 +0200, Nick Piggin wrote:
> > >
> > > Maybe I don't understand your description correctly. The TLB contains
> > > PMDs, but you say the HW still logically performs another translation
> > > step using entries in the PMD pages? If I understand that correctly,
> > > then generic mm does not actually care and would logically fit better
> > > if those entries were "linux ptes".
> >
> > They are :-)
> >
> > > The pte invalidation routines
> > > give the virtual address, which you could use to invalidate the TLB.
> >
> > For PTEs, yes, but not for those PMD entries. IE. I need the virtual
> > address when destroying PMDs so that I can invalidate those "indirect"
> > pages. PTEs are already taken care of by existing mechanisms.
>
> Hmm, so even after having invalidated all the pte translations
> then you still need to invalidate the empty indirect page? (or
> maybe you don't even invalidate the ptes if they're not cached
> in a TLB).
The PTEs are cached in the TLB (ie, they turn into normal TLB entries).
We need to invalidate the indirect entries when the PMD value changes
(ie, when the PTE page is freed), or the TLB would potentially continue
loading PTEs from a stale PTE page :-)
Hence my patch adding the virtual address to pte_free_tlb(), which is
called when a PTE page is freed. I'm adding it to the pmd/pud variants
too, for consistency and because I believe there's no cost.
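To make the calling convention concrete, here is a minimal sketch of the
generic-mm side, roughly following what mm/memory.c's free_pte_range()
looks like with the extra argument; treat it as an illustration of the
interface change rather than the exact patch text:

	/*
	 * Sketch of the generic mm side: the caller is already walking
	 * the address space when it tears a PTE page down, so it can
	 * hand the virtual address to the arch hook that frees it.
	 */
	static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
				   unsigned long addr)
	{
		pgtable_t token = pmd_pgtable(*pmd);

		pmd_clear(pmd);
		/* addr is the new argument: the start of the range the
		 * PTE page covered, so the arch can invalidate precisely. */
		pte_free_tlb(tlb, token, addr);
		tlb->mm->nr_ptes--;
	}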
> I believe x86 is also allowed to cache higher level page tables
> in non-cache coherent storage, and I think it just avoids this
> issue by flushing the entire TLB when potentially tearing down
> upper levels. So in theory I think your patch could allow x86 to
> use invlpg more often as well (in practice the flush-all case
> and TLB refills are so fast in comparison with invlpg that it
> probably doesn't gain much especially when talking about
> invalidating upper levels). So making the generic VM more
> flexible like that is no problem for me.
Ah, that's good to know. I don't know that much about the x86 way of
doing these things :-)
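As an aside, a hypothetical arch-side hook shows the kind of targeted
invalidation the extra argument enables; invalidate_indirect_entry() is
an invented name for illustration only, not part of the patch or of any
architecture's code:

	/*
	 * Hypothetical arch hook: with the virtual address available,
	 * freeing a PTE page can shoot down just the "indirect" TLB
	 * entry covering that range instead of flushing the whole TLB.
	 */
	#define __pte_free_tlb(tlb, ptepage, address)			\
	do {								\
		invalidate_indirect_entry((tlb)->mm, (address));	\
		pgtable_page_dtor(ptepage);				\
		tlb_remove_page((tlb), (ptepage));			\
	} while (0)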
Cheers,
Ben.