Message-Id: <1580995070-25139-1-git-send-email-cai@lca.pw>
Date: Thu, 6 Feb 2020 08:17:50 -0500
From: Qian Cai <cai@....pw>
To: akpm@...ux-foundation.org
Cc: jhubbard@...dia.com, ira.weiny@...el.com, dan.j.williams@...el.com,
jack@...e.cz, elver@...gle.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Qian Cai <cai@....pw>
Subject: [PATCH] mm: fix a data race in put_page()
page->flags could be accessed concurrently, as noticed by KCSAN:
 BUG: KCSAN: data-race in page_cpupid_xchg_last / put_page

 write (marked) to 0xfffffc0d48ec1a00 of 8 bytes by task 91442 on cpu 3:
  page_cpupid_xchg_last+0x51/0x80
  page_cpupid_xchg_last at mm/mmzone.c:109 (discriminator 11)
  wp_page_reuse+0x3e/0xc0
  wp_page_reuse at mm/memory.c:2453
  do_wp_page+0x472/0x7b0
  do_wp_page at mm/memory.c:2798
  __handle_mm_fault+0xcb0/0xd00
  handle_pte_fault at mm/memory.c:4049
  (inlined by) __handle_mm_fault at mm/memory.c:4163
  handle_mm_fault+0xfc/0x2f0
  handle_mm_fault at mm/memory.c:4200
  do_page_fault+0x263/0x6f9
  do_user_addr_fault at arch/x86/mm/fault.c:1465
  (inlined by) do_page_fault at arch/x86/mm/fault.c:1539
  page_fault+0x34/0x40

 read to 0xfffffc0d48ec1a00 of 8 bytes by task 94817 on cpu 69:
  put_page+0x15a/0x1f0
  page_zonenum at include/linux/mm.h:923
  (inlined by) is_zone_device_page at include/linux/mm.h:929
  (inlined by) page_is_devmap_managed at include/linux/mm.h:948
  (inlined by) put_page at include/linux/mm.h:1023
  wp_page_copy+0x571/0x930
  wp_page_copy at mm/memory.c:2615
  do_wp_page+0x107/0x7b0
  __handle_mm_fault+0xcb0/0xd00
  handle_mm_fault+0xfc/0x2f0
  do_page_fault+0x263/0x6f9
  page_fault+0x34/0x40

 Reported by Kernel Concurrency Sanitizer on:
 CPU: 69 PID: 94817 Comm: systemd-udevd Tainted: G W O L 5.5.0-next-20200204+ #6
 Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 07/10/2019
Both the read and the write are performed while holding only the
non-exclusive mmap_sem. Since the read checks specific bits (up to
three bits at the moment) in page->flags, load tearing could in theory
trigger a logic bug.
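
To illustrate the concern, here is a hypothetical stand-alone sketch
(not kernel code; the names and bit positions are stand-ins for the
real definitions) of the kind of split a compiler is allowed to
perform on a plain, unannotated load:

	/*
	 * Illustrative only: a plain 64-bit load may legally be
	 * compiled into two 32-bit loads, so a concurrent writer can
	 * slip in between the two halves.
	 */
	#include <inttypes.h>
	#include <stdio.h>

	#define ZONES_PGSHIFT	62	/* stand-in, not the real value */
	#define ZONES_MASK	0x3ULL

	static uint64_t flags;		/* stands in for page->flags */

	static uint64_t torn_load(const uint64_t *p)
	{
		uint32_t lo = ((const uint32_t *)p)[0];
		/* a writer like page_cpupid_xchg_last() may store here */
		uint32_t hi = ((const uint32_t *)p)[1];
		return ((uint64_t)hi << 32) | lo;
	}

	int main(void)
	{
		flags = 1ULL << ZONES_PGSHIFT;
		printf("zone = %" PRIu64 "\n",
		       (torn_load(&flags) >> ZONES_PGSHIFT) & ZONES_MASK);
		return 0;
	}

If the concurrent store lands between the two halves, the decoded
value mixes old and new bits of page->flags.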
One way to fix this would be to introduce a put_page_lockless() for
those call sites, but that seems like overkill and would be awkward to
use. Instead, just add a READ_ONCE() for the read in page_zonenum()
for now. This should not affect correctness, and the trade-off is
small: compilers might generate slightly less efficient code in a few
places.
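
For reference, a rough paraphrase of what READ_ONCE() amounts to for
scalar types (the real definition in include/linux/compiler.h has more
plumbing) is a single volatile access:

	/* Approximation only; see include/linux/compiler.h. */
	#define READ_ONCE(x)	(*(const volatile typeof(x) *)&(x))

The volatile access forces the compiler to emit exactly one full-width
load, with no tearing, refetching, or fusing, which is all
page_zonenum() needs here.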
Signed-off-by: Qian Cai <cai@....pw>
---
include/linux/mm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 52269e56c514..f8529aa971c0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -920,7 +920,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 static inline enum zone_type page_zonenum(const struct page *page)
 {
-	return (page->flags >> ZONES_PGSHIFT) & ZONES_MASK;
+	return (READ_ONCE(page->flags) >> ZONES_PGSHIFT) & ZONES_MASK;
 }
 
 #ifdef CONFIG_ZONE_DEVICE
--
1.8.3.1