Message-ID: <a6f28a45-ee0a-183f-fa60-28a56e1c506c@redhat.com>
Date: Mon, 1 Aug 2022 14:04:57 +0200
From: David Hildenbrand <david@...hat.com>
To: Charan Teja Kalla <quic_charante@...cinc.com>,
akpm@...ux-foundation.org, quic_pkondeti@...cinc.com,
pasha.tatashin@...een.com, sjpark@...zon.de, sieberf@...zon.com,
shakeelb@...gle.com, dhowells@...hat.com, willy@...radead.org,
liuting.0x7c00@...edance.com, minchan@...nel.org,
Michal Hocko <mhocko@...e.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH V2] mm: fix use-after free of page_ext after race with
memory-offline
On 01.08.22 13:50, Charan Teja Kalla wrote:
> Thanks David!!
>
> On 8/1/2022 2:00 PM, David Hildenbrand wrote:
>>> Having said that, I am open to going with call_rcu(), and in fact it
>>> would be a much simpler change: I can do the freeing of the page_ext in
>>> __free_page_ext() itself, which is called for every section, thereby
>>> avoiding the extra tracking flag PAGE_EXT_INVALID.
>>> ...........
>>> WRITE_ONCE(ms->page_ext, NULL);
>>> call_rcu(rcu_head, fun); // Free in fun()
>>> .............
>>>
>>> Or is your opinion to use call_rcu() only once, in place of
>>> synchronize_rcu(), after invalidating all the page_exts of the memory block?
>>
>> Yeah, that would be an option. And if you fail to allocate a temporary
>> buffer to hold the data-to-free (structure containing rcu_head), the
>> slower fallback path would be synchronize_rcu().
>>
>
> I will add a note in the code that, if some optimization needs to be done
> in this path in the future, this option can be considered. I hope this
> will be fine for now?
IMHO yes. But no need to add all these details to the patch description
(try keeping it short and precise). You can always just link to the
discussion, e.g., via
https://lkml.kernel.org/r/a26ce299-aed1-b8ad-711e-a49e82bdd180@quicinc.com
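
For reference, the pattern discussed above could look roughly like the
sketch below. This is only an illustrative sketch, not the actual patch:
the container struct (page_ext_rcu_free), its fields, and the plain
kfree() calls are assumptions (the real code would free the table with
whatever matches its allocation, and ms->page_ext also encodes a pfn
offset that is ignored here).

#include <linux/kernel.h>	/* container_of() */
#include <linux/mmzone.h>	/* struct mem_section */
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical container so the deferred free has an rcu_head to hang off. */
struct page_ext_rcu_free {
	struct rcu_head rcu;
	void *table;		/* the page_ext allocation to be freed */
};

static void page_ext_rcu_free_cb(struct rcu_head *rcu)
{
	struct page_ext_rcu_free *work =
		container_of(rcu, struct page_ext_rcu_free, rcu);

	/* Runs after a grace period, so no RCU reader can still see the table. */
	kfree(work->table);	/* stand-in for the real free routine */
	kfree(work);
}

static void free_section_page_ext(struct mem_section *ms)
{
	void *table = READ_ONCE(ms->page_ext);
	struct page_ext_rcu_free *work;

	/* Unpublish first, so readers under rcu_read_lock() see NULL. */
	WRITE_ONCE(ms->page_ext, NULL);

	work = kmalloc(sizeof(*work), GFP_KERNEL);
	if (work) {
		work->table = table;
		call_rcu(&work->rcu, page_ext_rcu_free_cb);
	} else {
		/* Slower fallback: wait out the grace period synchronously. */
		synchronize_rcu();
		kfree(table);	/* stand-in for the real free routine */
	}
}

The same container-plus-fallback idea applies if call_rcu() is issued only
once per memory block, after invalidating the page_ext of every section in
that block, rather than once per section.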
--
Thanks,
David / dhildenb