Message-ID: <76cf0aca5a2f8e9b94fd0631274a3d4ad825d077.camel@intel.com>
Date: Tue, 4 Sep 2018 07:01:58 +0000
From: "Yang, Bin" <bin.yang@...el.com>
To: "tglx@...utronix.de" <tglx@...utronix.de>
CC: "mingo@...nel.org" <mingo@...nel.org>,
"hpa@...or.com" <hpa@...or.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"Gross, Mark" <mark.gross@...el.com>,
"x86@...nel.org" <x86@...nel.org>,
"Hansen, Dave" <dave.hansen@...el.com>
Subject: Re: [PATCH v3 1/5] x86/mm: avoid redundant checking if pgprot has
no change
On Mon, 2018-09-03 at 23:57 +0200, Thomas Gleixner wrote:
> On Tue, 21 Aug 2018, Bin Yang wrote:
> > --- a/arch/x86/mm/pageattr.c
> > +++ b/arch/x86/mm/pageattr.c
> > @@ -629,6 +629,22 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
> > new_prot = static_protections(req_prot, address, pfn);
> >
> > /*
> > + * static_protections() is used to check specific protection flags
> > + * for certain areas of memory. The old pgprot should have been
> > + * checked already when it was first applied. If it was not, that is
> > + * a bug in some other code and needs to be fixed there.
> > + *
> > + * If the new pgprot is the same as the old pgprot, return directly
> > + * without any additional checking. The static_protections() check
> > + * below is pointless if pgprot has not changed. Skipping it avoids
> > + * redundant work and speeds up the large page split check.
> > + */
> > + if (pgprot_val(new_prot) == pgprot_val(old_prot)) {
>
> This is actually broken.
>
> Assume that for the start address:
>
> req_prot != old_prot
> and
> new_prot != req_prot
> and
> new_prot == old_prot
> and
> numpages > number_of_static_protected_pages(address)
>
> Then the new check will return with split = NO and the pages after the
> static protected area won't be updated -> FAIL! IOW, you partially
> reintroduce the bug which was fixed by adding this check loop.
>
> So this is a new optimization check which needs to be:
>
> if (pgprot_val(req_prot) == pgprot_val(old_prot))
>
> and that check wants to go above:
>
> new_prot = static_protections(req_prot, address, pfn);
Thanks for your suggestion, I will fix it.
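Something like this, I think (untested sketch, assuming the usual
do_split/out_unlock convention in try_preserve_large_page()):

	/*
	 * If the requested pgprot is identical to the existing one,
	 * nothing changes for the whole range and the large page can
	 * be preserved as is. This relies on old_prot having passed
	 * static_protections() when it was originally established.
	 */
	if (pgprot_val(req_prot) == pgprot_val(old_prot)) {
		do_split = 0;
		goto out_unlock;
	}

	new_prot = static_protections(req_prot, address, pfn);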
>
> Both under the assumption that old_prot is correct already.
>
> Now the question is whether this assumption can be made. The current code
> does that already today in case of page splits because it copies the
> existing pgprot of the large page unmodified over to the new split PTE
> page. IOW, if the current mapping is incorrect it will stay that way if
> it's not part of the actually modified range.
>
> I'm a bit worried about not having such a check, but if we add that then
> this should be done under a debug option for performance reasons.
>
> The last patch which does the overlap check is equally broken:
Sorry, I did not understand how the last patch is broken. It checks the old prot
to make sure the current mapping is correct, as below:
WARN_ON_ONCE(needs_static_protections(old_prot, addr, psize, old_pfn));
If it is correct, the above assumption already holds. If not, we can split the
large page; it looks safe to split a large page whose mapping is wrong. I would
prefer to change the warning code above as below:
	if (needs_static_protections(old_prot, addr, psize, old_pfn)) {
		WARN_ON_ONCE(1);
		goto out_unlock;
	}
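If I read try_preserve_large_page() correctly, do_split is still at its default
of 1 at that point, so the goto forces the large page to be split instead of
preserving the bogus mapping.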
>
> + /*
> + * Ensure that the requested pgprot does not violate static protection
> + * requirements.
> + */
> + new_prot = static_protections(req_prot, address,
> + numpages << PAGE_SHIFT, pfn);
>
> It expands new_prot to the whole range even if the protections only
> overlap. That should not happen in practice, but we have no checks for that
> at all.
The code below from patch #3 should cover this check. It double-checks
new_prot across the whole large page range:
	if (needs_static_protections(new_prot, addr, psize, old_pfn))
		goto out_unlock;
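For clarity, the helper is intended to work roughly like this (a paraphrased
sketch of the idea in patch #3, not the literal patch code):

	/*
	 * Return true if static_protections() would modify @prot for
	 * any 4K page inside the large page.
	 */
	static bool needs_static_protections(pgprot_t prot, unsigned long addr,
					     unsigned long psize,
					     unsigned long pfn)
	{
		unsigned long end = addr + psize;

		for (; addr < end; addr += PAGE_SIZE, pfn++) {
			if (pgprot_val(static_protections(prot, addr, pfn)) !=
			    pgprot_val(prot))
				return true;
		}
		return false;
	}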
>
> The whole thing needs way more thought in order not to (re)introduce subtle
> and hard to debug bugs.
>
> Thanks,
>
> tglx