Message-ID: <20250423023703.632613-1-richard120310@gmail.com>
Date: Wed, 23 Apr 2025 10:37:03 +0800
From: I Hsin Cheng <richard120310@...il.com>
To: syzbot+de1498ff3a934ac5e8b4@...kaller.appspotmail.com,
viro@...iv.linux.org.uk,
brauner@...nel.org
Cc: jack@...e.cz,
linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org,
jfs-discussion@...ts.sourceforge.net,
shaggy@...nel.org,
syzkaller-bugs@...glegroups.com,
skhan@...uxfoundation.org,
linux-kernel-mentees@...ts.linux.dev,
I Hsin Cheng <richard120310@...il.com>
Subject: [RFC PATCH] fs/buffer: Handle folio without buffers in drop_buffers()
When the folio doesn't have any buffers, folio_buffers(folio) returns
NULL, so the subsequent buffer_busy(bh) check dereferences a NULL
pointer. Handle this case by jumping straight to detaching the folio
when it has no buffers.
Reported-by: syzbot+de1498ff3a934ac5e8b4@...kaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=de1498ff3a934ac5e8b4
Fixes: 6439476311a64 ("fs: Convert drop_buffers() to use a folio")
Signed-off-by: I Hsin Cheng <richard120310@...il.com>
---
syzbot reported a null pointer dereference. [1]

If the folio passed to drop_buffers() doesn't have any buffers, the
assignment "bh = head" sets "bh" to NULL, and the following buffer
cleanup loop dereferences that NULL pointer.
I checked other users of folio_buffers(), e.g. the one in
buffer_check_dirty_writeback() [2]. They generally use the same
approach of checking whether folio_buffers() returned NULL before
walking the buffers.
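To make the flow I mean concrete, here is a tiny userspace model of the
traversal (not kernel code; "struct bh_model", "drop_model" and the busy
flag are made-up names, and it only mirrors the shape of the loop, not
the real locking or refcounting):

/*
 * Userspace toy model of the drop_buffers() traversal -- NOT kernel code.
 * Build: gcc -Wall -o drop_model drop_model.c && ./drop_model
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct bh_model {
	struct bh_model *b_this_page;	/* circular per-folio list */
	bool busy;			/* stands in for buffer_busy(bh) */
};

/* Mirrors the patched flow: bail out early when there are no buffers. */
static bool drop_model(struct bh_model *head)
{
	struct bh_model *bh;

	if (!head)		/* the guard this patch adds */
		return true;	/* nothing to free, folio can be detached */

	bh = head;
	do {
		if (bh->busy)
			return false;	/* "goto failed" in the real code */
		bh = bh->b_this_page;
	} while (bh != head);

	return true;
}

int main(void)
{
	struct bh_model a, b;

	/* Two idle buffers linked into the usual circular list. */
	a.b_this_page = &b;
	b.b_this_page = &a;
	a.busy = false;
	b.busy = false;

	printf("ring of 2 idle buffers -> %s\n",
	       drop_model(&a) ? "dropped" : "kept");

	/*
	 * A folio without buffers: folio_buffers() would return NULL here.
	 * Without the "if (!head)" check, the do/while above would read
	 * through a NULL pointer, which is the crash in [1].
	 */
	printf("no buffers (head == NULL) -> %s\n",
	       drop_model(NULL) ? "dropped" : "kept");

	return 0;
}

This is only meant to illustrate why the early NULL check lets us skip
straight to detaching the folio; it compiles with plain gcc and prints
"dropped" for both cases.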
I'm not sure whether it's normal for a folio without buffers to reach
drop_buffers(). If it isn't, we may need to dig deeper into the problem
and find out where the folio's buffers were freed or corrupted; let me
know if that's needed and what I can test to help. I'm new to fs, so
please correct me if I'm wrong, I'd be happy to learn more about what
the expected behavior for such a folio is. Thanks!
[1]:
BUG: KASAN: null-ptr-deref in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: null-ptr-deref in atomic_read include/linux/atomic/atomic-instrumented.h:32 [inline]
BUG: KASAN: null-ptr-deref in buffer_busy fs/buffer.c:2881 [inline]
BUG: KASAN: null-ptr-deref in drop_buffers+0x6f/0x710 fs/buffer.c:2893
Read of size 4 at addr 0000000000000060 by task kswapd0/74
CPU: 0 UID: 0 PID: 74 Comm: kswapd0 Not tainted 6.12.0-rc1-syzkaller-00031-ge32cde8d2bd7 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
print_report+0xe8/0x550 mm/kasan/report.c:491
kasan_report+0x143/0x180 mm/kasan/report.c:601
kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
instrument_atomic_read include/linux/instrumented.h:68 [inline]
atomic_read include/linux/atomic/atomic-instrumented.h:32 [inline]
buffer_busy fs/buffer.c:2881 [inline]
drop_buffers+0x6f/0x710 fs/buffer.c:2893
try_to_free_buffers+0x295/0x5f0 fs/buffer.c:2947
shrink_folio_list+0x240c/0x8cc0 mm/vmscan.c:1432
evict_folios+0x549b/0x7b50 mm/vmscan.c:4583
try_to_shrink_lruvec+0x9ab/0xbb0 mm/vmscan.c:4778
shrink_one+0x3b9/0x850 mm/vmscan.c:4816
shrink_many mm/vmscan.c:4879 [inline]
lru_gen_shrink_node mm/vmscan.c:4957 [inline]
shrink_node+0x3799/0x3de0 mm/vmscan.c:5937
kswapd_shrink_node mm/vmscan.c:6765 [inline]
balance_pgdat mm/vmscan.c:6957 [inline]
kswapd+0x1ca3/0x3700 mm/vmscan.c:7226
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
</TASK>
[2]: https://elixir.bootlin.com/linux/v6.14.3/source/fs/buffer.c#L97
Best regards,
I Hsin Cheng
---
fs/buffer.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/buffer.c b/fs/buffer.c
index cc8452f60251..29fd17f78265 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2883,6 +2883,8 @@ drop_buffers(struct folio *folio, struct buffer_head **buffers_to_free)
 	struct buffer_head *head = folio_buffers(folio);
 	struct buffer_head *bh;
 
+	if (!head)
+		goto detach_folio;
 	bh = head;
 	do {
 		if (buffer_busy(bh))
@@ -2897,6 +2899,7 @@ drop_buffers(struct folio *folio, struct buffer_head **buffers_to_free)
 			__remove_assoc_queue(bh);
 		bh = next;
 	} while (bh != head);
+detach_folio:
 	*buffers_to_free = head;
 	folio_detach_private(folio);
 	return true;
--
2.43.0