Message-ID: <813b1141-4050-ead7-ec52-a2b3b8e26fee@126.com>
Date: Mon, 17 Jun 2024 19:22:59 +0800
From: yangge1116 <yangge1116@....com>
To: David Hildenbrand <david@...hat.com>, Matthew Wilcox <willy@...radead.org>
Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>, 21cnbao@...il.com,
akpm@...ux-foundation.org, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
liuzixing@...on.cn
Subject: Re: [PATCH] mm/gup: don't check page lru flag before draining it
On 2024/6/17 5:52 PM, David Hildenbrand wrote:
> Why would we want to make folio_maybe_dma_pinned() detection that worse

I just wanted to fix it with an existing helper, but that does seem a bit
unreasonable. I will prepare a V2 that uses folio_test_lru(folio) for the
check instead:
static unsigned long collect_longterm_unpinnable_pages(...)
{
	...
		if (!folio_test_lru(folio) && drain_allow) {
			lru_add_drain_all();
			drain_allow = false;
		}
	...
}
For reference, folio_mark_lazyfree() already clears the LRU flag before
adding the folio to its per-CPU batch:

void folio_mark_lazyfree(struct folio *folio)
{
	if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
	    !folio_test_swapcache(folio) && !folio_test_unevictable(folio)) {
		struct folio_batch *fbatch;

		folio_get(folio);
		if (!folio_test_clear_lru(folio)) {
			folio_put(folio);
			return;
		}

		local_lock(&cpu_fbatches.lock);
		fbatch = this_cpu_ptr(&cpu_fbatches.lru_lazyfree);
		folio_batch_add_and_move(fbatch, folio, lru_lazyfree_fn);
		local_unlock(&cpu_fbatches.lock);
	}
}
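
A minimal sketch (not part of the patch; the helper name gup_maybe_drain_lru()
is made up here) of how the GUP-side check is expected to interact with
batch-add paths that clear the LRU flag the way folio_mark_lazyfree() does:

#include <linux/mm.h>
#include <linux/swap.h>

/*
 * Illustrative helper only: if the batch-add side cleared the LRU flag
 * (see folio_test_clear_lru() above), a folio that is not marked LRU may
 * still be sitting in a per-CPU folio batch, so flush the batches once
 * before treating the folio as unmovable.
 */
static void gup_maybe_drain_lru(struct folio *folio, bool *drain_allow)
{
	if (!folio_test_lru(folio) && *drain_allow) {
		lru_add_drain_all();	/* flush per-CPU folio batches on all CPUs */
		*drain_allow = false;	/* drain at most once per collection pass */
	}
}

In collect_longterm_unpinnable_pages() the same check is open-coded as shown
above; since lru_add_drain_all() schedules drain work on every CPU, the
drain_allow flag limits it to a single call per pass.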