Message-ID: <ZPs8+sLv5oaubrKj@casper.infradead.org>
Date: Fri, 8 Sep 2023 16:25:46 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Mirsad Todorovac <mirsad.todorovac@....unizg.hr>
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Christoph Hellwig <hch@....de>,
Sagi Grimberg <sagi@...mberg.me>,
linux-nvme@...ts.infradead.org
Subject: Re: BUG: KCSAN: data-race in folio_batch_move_lru / mpage_read_end_io

On Thu, Aug 31, 2023 at 03:52:49PM +0100, Matthew Wilcox wrote:
> > read to 0xffffef9a44978bc0 of 8 bytes by task 348 on cpu 12:
> > folio_batch_move_lru (./include/linux/mm.h:1814 ./include/linux/mm.h:1824 ./include/linux/memcontrol.h:1636 ./include/linux/memcontrol.h:1659 mm/swap.c:216)
> > folio_batch_add_and_move (mm/swap.c:235)
> > folio_add_lru (./arch/x86/include/asm/preempt.h:95 mm/swap.c:518)
> > folio_add_lru_vma (mm/swap.c:538)
> > do_anonymous_page (mm/memory.c:4146)
>
> This is the part I don't understand. The path to calling
> folio_add_lru_vma() comes directly from vma_alloc_zeroed_movable_folio():
>
[snip]
>
> (sorry that's a lot of lines). But there's _nowhere_ there that sets
> PG_locked. It's a freshly allocated page; all page flags (that are
> actually flags; ignore the stuff up at the top) should be clear. We
> even check that with PAGE_FLAGS_CHECK_AT_PREP. Plus, it doesn't
> make sense that we'd start I/O; the page is freshly allocated, full of
> zeroes; there's no backing store to read the page from.
>
> It really feels like this page was freed while it was still under I/O
> and it's been reallocated to this victim process.
>
> I'm going to try a few things and see if I can figure this out.
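
(To make that invariant concrete: every page coming out of __alloc_pages()
should have all of the PAGE_FLAGS_CHECK_AT_PREP bits clear, PG_locked
included.  A minimal sketch of such a check is below; assert_fresh_page()
is a hypothetical helper for illustration, not a real kernel symbol.)

#include <linux/mm.h>
#include <linux/mmdebug.h>
#include <linux/page-flags.h>

/*
 * Hypothetical debug helper: a freshly allocated page must have all
 * PAGE_FLAGS_CHECK_AT_PREP bits clear, PG_locked in particular.
 * PG_head is excluded because compound allocations legitimately have
 * it set by page prep.  Seeing PG_locked here would mean the page was
 * freed while still under I/O and then handed out again.
 */
static inline void assert_fresh_page(struct page *page)
{
	VM_BUG_ON_PAGE(PageLocked(page), page);
	VM_BUG_ON_PAGE(page->flags &
		       (PAGE_FLAGS_CHECK_AT_PREP & ~(1UL << PG_head)), page);
}
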
I'm having trouble reproducing this.  Can you get it to happen reliably?
This is what I'm currently running with, and it doesn't trigger; I'd
expect it to trigger if we were going to hit the KCSAN bug.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0c5be12f9336..d22e8798c326 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4439,6 +4439,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
page = __alloc_pages_slowpath(alloc_gfp, order, &ac);
out:
+ VM_BUG_ON_PAGE(page && (page->flags & (PAGE_FLAGS_CHECK_AT_PREP &~ (1 << PG_head))), page);
if (memcg_kmem_online() && (gfp & __GFP_ACCOUNT) && page &&
unlikely(__memcg_kmem_charge_page(page, gfp, order) != 0)) {
__free_pages(page, order);
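
(PG_head is masked out of the check above since compound allocations
have PG_head set by page prep before __alloc_pages() returns, so only
genuinely unexpected flags should trip the VM_BUG_ON.)

For what it's worth, something as dumb as the loop below should hammer
the do_anonymous_page() path from the KCSAN trace while that check is
in place.  This is just a sketch of mine, not from the thread, and the
mpage read side of the race would still have to come from your normal
workload on the same machine:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	const size_t len = 64UL << 20;	/* 64MB of anonymous memory */
	char *p;
	int i;

	for (i = 0; i < 1000; i++) {
		p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/* Write faults allocate fresh pages via do_anonymous_page() */
		memset(p, 0xaa, len);
		/* Return the pages to the allocator for reuse */
		munmap(p, len);
	}
	return 0;
}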