Message-ID: <Zfg0WLrcOmCtdn_M@casper.infradead.org>
Date: Mon, 18 Mar 2024 12:32:24 +0000
From: Matthew Wilcox <willy@...radead.org>
To: 黄朝阳 (Zhaoyang Huang) <zhaoyang.huang@...soc.com>
Cc: Zhaoyang Huang <huangzhaoyang@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
康纪滨 (Steve Kang) <Steve.Kang@...soc.com>
Subject: Re: summarize all information again at bottom//reply: reply: [PATCH]
mm: fix a race scenario in folio_isolate_lru

Stop creating new threads. You're really annoying.

On Mon, Mar 18, 2024 at 09:32:32AM +0000, 黄朝阳 (Zhaoyang Huang) wrote:
> Summarizing all the information again below to make it clearer (thread 2
> has been removed, since it is not mandatory and only makes the scenario
> more complex).

You've gone back to over-indenting. STOP IT.

> #thread 0(madivise_cold_and_pageout) #thread1(truncate_inode_pages_range)

This is still an impossible race, and it's the third time I've told you
this. And madivise_cold_and_pageout does not exist; it's
madvise_cold_or_pageout_pte_range(). I'm going to stop responding to
your emails if you keep on uselessly repeating the same mistakes.

So, once again,

For madvise_cold_or_pageout_pte_range() to find a page, it must have
a PTE pointing to the page. That means there's a mapcount on the page.
That means there's a refcount on the page.
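
In outline, the walk looks something like this (a much-simplified sketch,
not the actual mm/madvise.c code; the function name is made up, and large
folios, TLB batching and the pageout itself are left out):

static void madvise_pageout_sketch(struct vm_area_struct *vma, pmd_t *pmd,
				   unsigned long addr, unsigned long end)
{
	LIST_HEAD(folio_list);
	pte_t *start_pte, *pte;
	spinlock_t *ptl;

	start_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	if (!start_pte)
		return;
	for (; addr < end; pte++, addr += PAGE_SIZE) {
		pte_t ptent = ptep_get(pte);
		struct folio *folio;

		if (!pte_present(ptent))
			continue;
		/*
		 * A present PTE means the folio is mapped here, so it
		 * holds a mapcount and therefore a refcount.
		 */
		folio = vm_normal_folio(vma, addr, ptent);
		if (!folio)
			continue;
		if (folio_isolate_lru(folio))
			list_add(&folio->lru, &folio_list);
	}
	pte_unmap_unlock(start_pte, ptl);
	/* the real code then hands folio_list to reclaim_pages() */
}
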
truncate_inode_pages_range() will indeed attempt to remove a page from
the page cache. BUT before it does that, it has to shoot down TLB
entries that refer to the affected folios. That happens like this:

	for (i = 0; i < folio_batch_count(&fbatch); i++)
		truncate_cleanup_folio(fbatch.folios[i]);

truncate_cleanup_folio() -> unmap_mapping_folio() ->
unmap_mapping_range_tree() -> unmap_mapping_range_vma() ->
zap_page_range_single() -> unmap_single_vma() -> unmap_page_range() ->
zap_p4d_range() -> zap_pud_range() -> zap_pmd_range() -> zap_pte_range() ->
pte_offset_map_lock()
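
That is, the PTEs are torn down under the same page table lock that
madvise_cold_or_pageout_pte_range() holds while it examines them. Roughly
(again a simplified sketch with an invented name; swap entries, dirty/young
bookkeeping and TLB flush batching are omitted):

static void zap_pte_range_sketch(struct mmu_gather *tlb,
				 struct vm_area_struct *vma, pmd_t *pmd,
				 unsigned long addr, unsigned long end)
{
	struct mm_struct *mm = tlb->mm;
	pte_t *start_pte, *pte;
	spinlock_t *ptl;

	/* same split PTL that the madvise PTE walk holds */
	start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	if (!start_pte)
		return;
	for (; addr < end; pte++, addr += PAGE_SIZE) {
		pte_t ptent = ptep_get(pte);
		struct page *page;

		if (pte_none(ptent) || !pte_present(ptent))
			continue;
		page = vm_normal_page(vma, addr, ptent);
		/* the PTE is cleared while the PTL is held ... */
		ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
		tlb_remove_tlb_entry(tlb, pte, addr);
		if (!page)
			continue;
		/* ... and only then are mapcount and refcount dropped */
		folio_remove_rmap_pte(page_folio(page), page, vma);
		tlb_remove_page(tlb, page);
	}
	pte_unmap_unlock(start_pte, ptl);
}
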
> pte_offset_map_lock takes NO lock
>                                     truncate_inode_folio(refcnt == 2)
>                                     <decrease the refcnt of page cache>
> folio_isolate_lru(refcnt == 1)
>                                     release_pages(refcnt == 1)
> folio_test_clear_lru
> <remove folio's PG_lru>
>                                     folio_put_testzero == true
> folio_get(refer to isolation)
>                                     folio_test_lru == false
>                                     <No lruvec_del_folio>
>                                     list_add(folio->lru, pages_to_free)
> ****current folio will break LRU's integrity since it has not been deleted****
>
> 0. The folio's refcnt is decreased from 2 to 1 by filemap_remove_folio()
> 1. thread 0 calls folio_isolate_lru() with refcnt == 1; the folio comes
>    from the VMA's PTE
> 2. thread 1 calls release_pages() with refcnt == 1; the folio comes
>    from the address_space
>    (refcnt == 1 makes sense for both folio_isolate_lru() and release_pages())
> 3. thread 0 clears the folio's PG_lru via folio_test_clear_lru()
> 4. thread 1 decreases the folio's refcnt from 1 to 0 and gets permission
>    to proceed (folio_put_testzero() returns true)
> 5. thread 1 fails the folio_test_lru() check and does no list_del(folio)
> 6. thread 1 wrongly adds the folio to pages_to_free, which breaks the LRU list
> 7. the next folio after the current one in thread 1 then triggers an
>    invalid list_del when lruvec_del_folio() is called
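
For reference, the two paths that scenario interleaves look roughly like
this in kernels of that vintage (condensed, not verbatim; memcg locking
and batching details are trimmed):

/* condensed from folio_isolate_lru() in mm/vmscan.c */
bool folio_isolate_lru(struct folio *folio)
{
	bool ret = false;

	VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);

	if (folio_test_clear_lru(folio)) {
		struct lruvec *lruvec;

		folio_get(folio);
		lruvec = folio_lruvec_lock_irq(folio);
		lruvec_del_folio(lruvec, folio);
		unlock_page_lruvec_irq(lruvec);
		ret = true;
	}

	return ret;
}

/* condensed from the freeing loop in release_pages() in mm/swap.c */
		if (!folio_put_testzero(folio))
			continue;

		if (folio_test_lru(folio)) {
			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
							     &flags);
			lruvec_del_folio(lruvec, folio);
			__folio_clear_lru_flags(folio);
		}

		list_add(&folio->lru, &pages_to_free);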