Message-ID: <alpine.DEB.2.21.1907152047550.1767@nanos.tec.linutronix.de>
Date: Mon, 15 Jul 2019 20:48:22 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Joerg Roedel <jroedel@...e.de>
cc: Joerg Roedel <joro@...tes.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH 1/3] x86/mm: Check for pfn instead of page in
vmalloc_sync_one()
On Mon, 15 Jul 2019, Joerg Roedel wrote:
> On Mon, Jul 15, 2019 at 03:08:42PM +0200, Thomas Gleixner wrote:
> > On Mon, 15 Jul 2019, Joerg Roedel wrote:
> >
> > > From: Joerg Roedel <jroedel@...e.de>
> > >
> > > Do not require a struct page for the mapped memory location
> > > because it might not exist. This can happen when an
> > > ioremapped region is mapped with 2MB pages.
> > >
> > > Signed-off-by: Joerg Roedel <jroedel@...e.de>
> >
> > Lacks a Fixes tag, hmm?
>
> Yeah, right. The question is which commit to put in there. The problem
> results from two changes:
>
> 1) Introduction of the !SHARED_KERNEL_PMD path on x86-32. In itself
>    this is not a problem, and the path was only enabled for
>    Xen-PV.
>
> 2) Huge ioremappings which use the PMD level. Also not a problem
>    by itself, but together with !SHARED_KERNEL_PMD it is problematic,
>    because it requires syncing the PMD entries between all
>    page tables, and that was not implemented.
>
> Before PTI-x32 was merged, this problem did not show up, maybe because
> the 32-bit Xen-PV users did not trigger it. But with PTI-x32, all PAE
> users run with !SHARED_KERNEL_PMD and the problem popped up.
>
> For the last patch I put the PTI-x32 enablement commit in the Fixes tag,
> because that was the one that showed up during bisection. But the more
> correct one would probably be
>
> 5d72b4fba40e ('x86, mm: support huge I/O mapping capability I/F')
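For illustration, a rough sketch of the PMD handling in question (hand-written
here, not the exact hunk from the patch; the helper name sync_one_pmd is made
up, the real code is the tail of vmalloc_sync_one() in arch/x86/mm/fault.c on
32-bit). The point is that the present-entry check compares pfns instead of
going through pmd_page(), because a 2MB ioremap PMD points at MMIO space that
has no struct page behind it:

#include <linux/mm.h>
#include <asm/pgtable.h>

/* Sketch only; the helper name is illustrative, not from mainline. */
static pmd_t *sync_one_pmd(pud_t *pud, pud_t *pud_k, unsigned long address)
{
	pmd_t *pmd, *pmd_k;

	pmd   = pmd_offset(pud, address);	/* PMD in the faulting page table */
	pmd_k = pmd_offset(pud_k, address);	/* PMD in the reference (init_mm) table */

	if (!pmd_present(*pmd)) {
		/* Entry missing in this page table: copy it from the reference. */
		set_pmd(pmd, *pmd_k);
	} else {
		/*
		 * Entry already present: it must map the same physical area.
		 * Compare by pfn rather than via pmd_page(), since an
		 * ioremapped 2MB mapping has no struct page to compare.
		 */
		BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
	}

	return pmd_k;
}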
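And, again as a simplified hand-written sketch rather than the mainline code:
the reason the sync is needed at all is that with !SHARED_KERNEL_PMD every
page table carries its own kernel PMD pages, so a kernel mapping created at
PMD level has to be propagated into every page table on the system. On 32-bit,
vmalloc_sync_all() does this by walking pgd_list under pgd_lock, roughly like
so (the function name below is made up):

#include <linux/mm.h>
#include <linux/spinlock.h>
#include <asm/pgtable.h>

/* Sketch only; simplified from what vmalloc_sync_all() does on 32-bit. */
static void sync_kernel_pmd_range(unsigned long start, unsigned long end)
{
	unsigned long address;

	for (address = start & PMD_MASK; address < end; address += PMD_SIZE) {
		struct page *page;

		spin_lock(&pgd_lock);
		list_for_each_entry(page, &pgd_list, lru) {
			/*
			 * Copy the PMD entry covering 'address' from init_mm
			 * into this page table; this is what
			 * vmalloc_sync_one() does for a single pgd.
			 */
			vmalloc_sync_one(page_address(page), address);
		}
		spin_unlock(&pgd_lock);
	}
}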
5d72b4fba40e as the Fixes tag looks about right.
Thanks,
tglx