Message-ID: <f962cfb2-0e30-741a-0a56-e3e2558b69c5@huawei.com>
Date: Wed, 8 Jun 2022 22:09:52 +0800
From: Miaohe Lin <linmiaohe@...wei.com>
To: Matthew Wilcox <willy@...radead.org>
CC: <akpm@...ux-foundation.org>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/vmscan: don't try to reclaim freed folios
On 2022/5/27 23:02, Matthew Wilcox wrote:
> On Fri, May 27, 2022 at 04:04:51PM +0800, Miaohe Lin wrote:
>> If folios were freed from under us, there's no need to reclaim them. Skip
>> these folios to save lots of cpu cycles and avoid possible unnecessary
>> disk IO.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
>> ---
>> mm/vmscan.c | 8 +++++++-
>> 1 file changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index f7d9a683e3a7..646dd1efad32 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1556,12 +1556,18 @@ static unsigned int shrink_page_list(struct list_head *page_list,
>> folio = lru_to_folio(page_list);
>> list_del(&folio->lru);
>>
>> + nr_pages = folio_nr_pages(folio);
>> + if (folio_ref_count(folio) == 1) {
>> + /* folio was freed from under us. So we are done. */
>> + WARN_ON(!folio_put_testzero(folio));
>
> What? No. This can absolutely happen. We have a refcount on the folio,
> which means that any other thread can temporarily raise the refcount,
> so this WARN_ON can trigger. Also, we don't hold the folio locked,
> or an extra reference, so nr_pages is unstable because it can be split.
When I reread the code, I found that the caller holds an extra reference to the
folio when calling isolate_lru_pages(), so the folio can't be split and thus
nr_pages should indeed be stable? Or am I missing something again?
Thanks!
>
>> + goto free_it;
>> + }
>> +
>> if (!folio_trylock(folio))
>> goto keep;
>>
>> VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
>>
>> - nr_pages = folio_nr_pages(folio);
>>
>> /* Account the number of base pages */
>> sc->nr_scanned += nr_pages;
>> --
>> 2.23.0
>>
>>