Message-ID: <20180427073719.GT15462@8bytes.org>
Date: Fri, 27 Apr 2018 09:37:20 +0200
From: "joro@...tes.org" <joro@...tes.org>
To: "Kani, Toshi" <toshi.kani@....com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"bp@...e.de" <bp@...e.de>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"guohanjun@...wei.com" <guohanjun@...wei.com>,
"wxf.wang@...ilicon.com" <wxf.wang@...ilicon.com>,
"stable@...r.kernel.org" <stable@...r.kernel.org>,
"x86@...nel.org" <x86@...nel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"willy@...radead.org" <willy@...radead.org>,
"hpa@...or.com" <hpa@...or.com>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"mingo@...hat.com" <mingo@...hat.com>,
"will.deacon@....com" <will.deacon@....com>,
"Hocko, Michal" <MHocko@...e.com>,
"cpandya@...eaurora.org" <cpandya@...eaurora.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH v2 2/2] x86/mm: implement free pmd/pte page interfaces
On Thu, Apr 26, 2018 at 10:30:14PM +0000, Kani, Toshi wrote:
> Thanks for the clarification. After reading through the SDM one more
> time, I agree that we need a TLB purge here. Here is my current
> understanding.
>
> - INVLPG purges both the TLB and the paging-structure caches, so the
> PMD cache was purged once.
> - However, the processor may cache this PMD entry again later during
> speculation, since it has the P-bit set. (This is where my
> misunderstanding was: speculation is not allowed to access the target
> address, but it may still cache this PMD entry.)
> - A single INVLPG on each processor then purges this PMD cache. It
> does not need a range purge (which was already done).
>
> Does it sound right to you?
The right fix is to first synchronize the changes when the PMD/PUD is
cleared and then flush the TLB system-wide. After that is done you can
free the page.
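Roughly, the required ordering looks like this (a hand-wavy sketch;
sync_page_tables() is a made-up placeholder here, the rest are the
usual kernel primitives):

	pte = (pte_t *)pmd_page_vaddr(*pmd);	/* the PTE page to free   */
	pmd_clear(pmd);				/* 1) unmap the PTE page  */
	sync_page_tables(addr);			/* 2) propagate the clear
						      to all page-tables  */
	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
						/* 3) system-wide flush   */
	free_page((unsigned long)pte);		/* 4) only now free it    */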
But doing all of that in pud_free_pmd_page()/pmd_free_pte_page() is too
expensive, as the TLB flush requires sending IPIs to all cores in the
system, and that on every call to these functions.
So what needs to be done is to fix this in the high-level ioremap code:
first unmap all affected PTE/PMD pages and collect them in a list. When
that is done, synchronize the changes with the other page-tables in the
system and do one system-wide TLB flush. Once that flush is complete,
the pages collected while unmapping can be freed. Then the new mappings
can be established and again synchronized with the other page-tables in
the system.
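In pseudo-code (all names invented here to illustrate the flow, this is
not a proposed API):

	LIST_HEAD(freelist);

	/* 1) unmap the old PTE/PMD pages and collect them */
	unmap_pgtable_pages(addr, end, &freelist);

	/* 2) sync the unmap into the other page-tables */
	sync_unmappings(addr, end);

	/* 3) one TLB flush for the whole range, not one per page */
	flush_tlb_kernel_range(addr, end);

	/* 4) now the collected pages can be freed safely */
	free_pgtable_pages(&freelist);

	/* 5) establish the new mapping and sync it again */
	ioremap_page_range(addr, end, phys_addr, prot);
	sync_mappings(addr, end);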
> As for the BUG_ON issue, are you able to reproduce this issue? If so,
> would you be able to test the fix?
Yes, I can reproduce the BUG_ON with my PTI patches and a fedora-i386
VM.
I already ran into the issue before your patches were merged upstream,
but my "fix" is different because it just prevents huge mappings when
smaller mappings existed there before. See commit

  e3e288121408 ("x86/pgtable: Don't set huge PUD/PMD on non-leaf entries")

for details. That patch does not fix the underlying problem, it only
hides it again, as the real fix needs some more work across
architectures.
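The core of that commit is just a check before installing the huge
entry (simplified here, see the commit for the real thing):

	/* in pud_set_huge(), and equivalently in pmd_set_huge(): */
	if (pud_present(*pud) && !pud_huge(*pud))
		return 0;	/* non-leaf entry, refuse the huge mapping */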
Your patch actually makes the problem worse: without it the PTE/PMD
pages were just leaked, so they could not be reused. With your patch the
pages can be used again, and the page-walker might establish TLB entries
based on whatever random content the new owner writes into them. This
can lead to all kinds of random and very hard to debug data-corruption
issues.
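To spell the hazard out (hypothetical interleaving):

	CPU 0 (ioremap path)		CPU 1
	--------------------		-----
	pud_free_pmd_page()
	  frees the PMD page,
	  no TLB flush
	page gets reallocated,
	new owner writes to it
					page-walker still holds the stale
					PUD entry in its paging-structure
					cache and interprets the new
					owner's data as PMD entries
					-> bogus TLB entries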
So until we make the generic ioremap code in lib/ioremap.c smarter about
unmapping/remapping ranges, the best solution is to make my fix work
again by reverting your patch.
Thanks,
Joerg