Message-ID: <f670c6ee-1c20-570f-68f9-42a3e1e85557@quicinc.com>
Date: Mon, 1 Aug 2022 17:20:19 +0530
From: Charan Teja Kalla <quic_charante@...cinc.com>
To: David Hildenbrand <david@...hat.com>, <akpm@...ux-foundation.org>,
<quic_pkondeti@...cinc.com>, <pasha.tatashin@...een.com>,
<sjpark@...zon.de>, <sieberf@...zon.com>, <shakeelb@...gle.com>,
<dhowells@...hat.com>, <willy@...radead.org>,
<liuting.0x7c00@...edance.com>, <minchan@...nel.org>,
Michal Hocko <mhocko@...e.com>
CC: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH V2] mm: fix use-after free of page_ext after race with
memory-offline
Thanks David!!
On 8/1/2022 2:00 PM, David Hildenbrand wrote:
>> Having said that, I am open to going with call_rcu(), and in fact it will be a
>> much simpler change where I can do the freeing of page_ext in
>> __free_page_ext() itself, which is called for every section, thereby
>> avoiding the extra tracking flag PAGE_EXT_INVALID.
>> ...........
>> WRITE_ONCE(ms->page_ext, NULL);
>> call_rcu(rcu_head, fun); // Free in fun()
>> .............
>>
>> Or is your opinion to use call_rcu() only once, in place of
>> synchronize_rcu(), after invalidating all the page_ext's of the memory block?
>
> Yeah, that would be an option. And if you fail to allocate a temporary
> buffer to hold the data-to-free (structure containing rcu_head), the
> slower fallback path would be synchronize_rcu().
>
I will add a note in the code that, if this path needs to be optimized in
the future, this option can be considered. Hope this is fine for now?
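Roughly, the below is what I have in mind for the simpler per-section
variant, with your fallback idea folded in. This is only a simplified
sketch: page_ext_free_work / page_ext_free_rcu are placeholder names (not
from the patch), and the offsetting of the page_ext base is left out.

struct page_ext_free_work {			/* placeholder name */
	struct rcu_head rcu;
	void *addr;				/* page_ext memory of one section */
};

static void page_ext_free_rcu(struct rcu_head *head)	/* placeholder name */
{
	struct page_ext_free_work *work =
		container_of(head, struct page_ext_free_work, rcu);

	free_page_ext(work->addr);
	kfree(work);
}

	/* in __free_page_ext(), per section: */
	addr = ms->page_ext;
	WRITE_ONCE(ms->page_ext, NULL);

	work = kmalloc(sizeof(*work), GFP_KERNEL);
	if (work) {
		work->addr = addr;
		call_rcu(&work->rcu, page_ext_free_rcu);
	} else {
		/* allocation failed: wait for readers, then free synchronously */
		synchronize_rcu();
		free_page_ext(addr);
	}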
> But again, I'm also not sure if we have to optimize here right now.
Thanks,
Charan