Message-ID: <222fc0b2-6ec0-98e7-833f-ea868b248446@redhat.com>
Date: Fri, 2 Dec 2022 13:07:02 +0100
From: David Hildenbrand <david@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Peter Xu <peterx@...hat.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Nadav Amit <nadav.amit@...il.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Ives van Hoorne <ives@...esandbox.io>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Alistair Popple <apopple@...dia.com>, stable@...r.kernel.org
Subject: Re: [PATCH v3 1/2] mm/migrate: Fix read-only page got writable when
recover pte
On 02.12.22 12:03, David Hildenbrand wrote:
> On 01.12.22 23:30, Andrew Morton wrote:
>> On Thu, 1 Dec 2022 16:42:52 +0100 David Hildenbrand <david@...hat.com> wrote:
>>
>>> On 01.12.22 16:28, Peter Xu wrote:
>>>>
>>>> I didn't reply here because I have already replied with the question on
>>>> the previous version, with a few attempts. Quoting myself:
>>>>
>>>> https://lore.kernel.org/all/Y3KgYeMTdTM0FN5W@x1n/
>>>>
>>>> The thing is recovering the pte into its original form is the
>>>> safest approach to me, so I think we need justification on why it's
>>>> always safe to set the write bit.
>>>>
>>>> I've also got another longer email trying to explain why I think it's the
>>>> other way round that needs to be justified, rather than justifying removal
>>>> of the write bit for a read migration entry, here:
>>>>
>>>
>>> And I disagree for this patch that is supposed to fix this hunk:
>>>
>>>
>>> @@ -243,11 +243,15 @@ static bool remove_migration_pte(struct page *page, struct vm_area_struct *vma,
>>> entry = pte_to_swp_entry(*pvmw.pte);
>>> if (is_write_migration_entry(entry))
>>> pte = maybe_mkwrite(pte, vma);
>>> + else if (pte_swp_uffd_wp(*pvmw.pte))
>>> + pte = pte_mkuffd_wp(pte);
>>>
>>> if (unlikely(is_zone_device_page(new))) {
>>> if (is_device_private_page(new)) {
>>> entry = make_device_private_entry(new, pte_write(pte));
>>> pte = swp_entry_to_pte(entry);
>>> + if (pte_swp_uffd_wp(*pvmw.pte))
>>> + pte = pte_mkuffd_wp(pte);
>>> }
>>> }
>>
>> David, I'm unclear on what you mean by the above. Can you please
>> expand?
>>
>>>
>>> There is really nothing to justify the other way around here.
>>> If it's broken, fix it independently and properly backport it independently.
>>>
>>> But we don't know about any such broken case.
>>>
>>> I have no energy to spare to argue further ;)
>>
>> This is a silent data loss bug, which is about as bad as it gets.
>> Under obscure conditions, fortunately. But please let's keep working
>> it. Let's aim for something minimal for backporting purposes. We can
>> revisit any cleanliness issues later.
>
> Okay, you activated my energy reserves.
>
>>
>> David, do you feel that the proposed fix will at least address the bug
>> without adverse side-effects?
>
> When I suspect something is dodgy, I unconsciously push back harder
> than I otherwise would.
>
> I just looked into the issue once again and realized that this patch
> here (and also my alternative proposal) most likely tackles the
> more-generic issue from the wrong direction. I found yet another such
> bug (most probably two, but I was too lazy to write another reproducer).
> Migration code does the right thing here -- IMHO -- and the issue should
> be fixed differently.
>
> I'm testing an alternative patch right now and will share it later
> today, along with a reproducer.
>
mprotect() reproducer attached.
--
Thanks,
David / dhildenb
[Attachment: "uffd-wp-mprotect.c", type text/x-csrc, 3186 bytes]