Message-ID: <8c494db6-1def-44ea-84ef-51d0140bddf3@redhat.com>
Date: Thu, 31 Oct 2024 10:47:08 +0100
From: David Hildenbrand <david@...hat.com>
To: Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org, x86@...nel.org,
xingwei lee <xrivendell7@...il.com>, yuxin wang <wang1315768607@....com>,
Marius Fleischer <fleischermarius@...il.com>,
Dave Hansen <dave.hansen@...ux.intel.com>, Andy Lutomirski
<luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, "H. Peter Anvin" <hpa@...or.com>,
Andrew Morton <akpm@...ux-foundation.org>, Ma Wupeng <mawupeng1@...wei.com>
Subject: Re: [PATCH v1] x86/mm/pat: fix VM_PAT handling when fork() fails in
copy_page_range()
On 30.10.24 22:32, Peter Xu wrote:
> On Tue, Oct 29, 2024 at 10:03:31PM +0100, David Hildenbrand wrote:
>> If track_pfn_copy() fails, we already added the dst VMA to the maple
>> tree. As fork() fails, we'll cleanup the maple tree, and stumble over
>> the dst VMA for which we neither performed any reservation nor copied
>> any page tables.
>>
>> Consequently untrack_pfn() will see VM_PAT and try obtaining the
>> PAT information from the page table -- which fails because the page
>> table was not copied.
>>
>> The easiest fix would be to simply clear the VM_PAT flag of the dst VMA
>> if track_pfn_copy() fails. However, the whole idea of "simply"
>> clearing the VM_PAT flag is shaky as well: if we passed track_pfn_copy()
>> and performed a reservation, but copying the page tables fails, we'll
>> simply clear the VM_PAT flag, not properly undoing the reservation ...
>> which is also wrong.
>
> David,
>
Hi Peter,
> Sorry, I have not yet had a chance to reply to your other email..
>
> The only concern I have with the current fix to fork() is.. we now have
> device drivers providing fault() on PFNMAPs, as vfio-pci does. I think
> that means we could potentially start to hit the same issue even without
> fork(), as long as the 1st pgtable entry of the PFNMAP range is not mapped
> when the process with the VM_PAT vma exit()s or munmap()s the vma.
As these drivers are not using remap_pfn_range(), there is no way they
could currently get VM_PAT set.
So what you describe is independent of the problem we are fixing here,
and this fix should sort out the issues with the current VM_PAT handling.
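To make the failure path from the patch description above concrete, roughly
(a heavily simplified sketch, not the real dup_mmap()/copy_page_range() code):

/* dup_mmap(): dst_vma is already linked into the child's maple tree */

/* copy_page_range(), heavily abridged: */
if (src_vma->vm_flags & VM_PFNMAP) {
	ret = track_pfn_copy(src_vma);	/* takes the PAT reservation */
	if (ret)
		return ret;		/* (1) dst_vma keeps VM_PAT,
					 *     no reservation taken   */
}
/* ... the p4d/pud/pmd/pte copy loop follows and can also fail (2) */

/*
 * In either error case fork() tears down the child's mm, stumbles over
 * dst_vma with VM_PAT set, and untrack_pfn() tries to read the PFN from
 * page tables that were never copied; in case (2) there is also a
 * reservation that needs undoing, so just clearing VM_PAT would not be
 * enough.
 */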
It is indeed an interesting question how to handle reservations when
*not* using remap_pfn_range() to cover the whole area.
remap_pfn_range() handles VM_PAT automatically because it is in a position
to do so: it knows that the whole range will map consecutive PFNs with the
same protection, and we expect that no part of the range suddenly gets
unmapped (and any driver that does that is buggy).
This behavior is, however, not guaranteed to be the case when
remap_pfn_range() is *not* called on the whole range.
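To illustrate the "easy" case: the classic driver mmap() handler, where a
single remap_pfn_range() call covers the whole VMA, so PAT tracking can be
set up (and later torn down) for the entire range in one go. A minimal
sketch, with made-up foo_* names:

#include <linux/fs.h>
#include <linux/mm.h>

struct foo_dev {
	phys_addr_t	bar_phys;
	size_t		bar_len;
};

static int foo_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct foo_dev *fdev = file->private_data;
	unsigned long size = vma->vm_end - vma->vm_start;

	if (size > fdev->bar_len)
		return -EINVAL;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

	/* One call for the whole range: consecutive PFNs, one protection. */
	return remap_pfn_range(vma, vma->vm_start,
			       fdev->bar_phys >> PAGE_SHIFT,
			       size, vma->vm_page_prot);
}

vfio-pci instead populates such a VMA lazily from fault(), so there is no
single point where the whole range gets mapped with one call like that.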
For that case (i.e., vfio-pci), I still wonder whether the driver shouldn't
do the reservation itself and leave VM_PAT alone.
In the driver, we'd do the reservation once and not worry about fork()
etc. ... and we'd undo the reservation once the last relevant VM_PFNMAP
VMA is gone or the driver lets go of the device. I assume there are
already mechanisms in place to deal with that to some degree, because
the driver cannot go away while any VMA still holds the VM_PFNMAP mapping
-- otherwise something would be seriously messed up.
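A very rough sketch of what I have in mind, assuming something like
arch_io_reserve_memtype_wc() / arch_io_free_memtype_wc() (which some GPU
drivers already use for their WC BARs) would be suitable; whether WC is the
memtype vfio-pci actually wants is a separate question, and the foo_* names
are made up:

#include <linux/io.h>

struct foo_dev {
	phys_addr_t	bar_phys;
	resource_size_t	bar_len;
};

/* Reserve the memtype once for the whole BAR when the device is set up ... */
static int foo_reserve_bar(struct foo_dev *fdev)
{
	return arch_io_reserve_memtype_wc(fdev->bar_phys, fdev->bar_len);
}

/* ... and drop it only once the device (and thus all mappings) is gone. */
static void foo_release_bar(struct foo_dev *fdev)
{
	arch_io_free_memtype_wc(fdev->bar_phys, fdev->bar_len);
}

That way, per-VMA VM_PAT tracking (and the fork() / partial-unmap corner
cases) would not be involved at all.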
Long story short: let's look into not using VM_PAT for that use case.
Looking at the VM_PAT issues we have had over time, not making it more
complicated sounds like a very reasonable thing to me :)
--
Cheers,
David / dhildenb