Message-Id: <20220209231406.187668-1-stephen.s.brennan@oracle.com>
Date: Wed, 9 Feb 2022 15:14:02 -0800
From: Stephen Brennan <stephen.s.brennan@...cle.com>
To: Alexander Viro <viro@...iv.linux.org.uk>, Jan Kara <jack@...e.cz>
Cc: Stephen Brennan <stephen.s.brennan@...cle.com>,
linux-kernel@...r.kernel.org, Luis Chamberlain <mcgrof@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, Arnd Bergmann <arnd@...db.de>,
Amir Goldstein <amir73il@...il.com>
Subject: [PATCH v2 0/4] Fix softlockup when adding inotify watch

Hi Al et al,

When a system with a large amount of memory has several million
negative dentries in a single directory, a softlockup can occur while
adding an inotify watch:

watchdog: BUG: soft lockup - CPU#20 stuck for 9s! [inotifywait:9528]
CPU: 20 PID: 9528 Comm: inotifywait Kdump: loaded Not tainted 5.16.0-rc4.20211208.el8uek.rc1.x86_64 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.4.1 12/03/2020
RIP: 0010:__fsnotify_update_child_dentry_flags+0xad/0x120
Call Trace:
<TASK>
fsnotify_add_mark_locked+0x113/0x160
inotify_new_watch+0x130/0x190
inotify_update_watch+0x11a/0x140
__x64_sys_inotify_add_watch+0xef/0x140
do_syscall_64+0x3b/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xae

This patch series is a modified version of the following:
https://lore.kernel.org/linux-fsdevel/1611235185-1685-1-git-send-email-gautham.ananthakrishna@oracle.com/

The strategy employed by this series is to move negative dentries to the
end of the d_subdirs list, and mark them with a flag as "tail negative".
Then, readers of the d_subdirs list, which are only interested in
positive dentries, can stop reading once they reach the first tail
negative dentry. With this series applied, I can avoid the above
softlockup caused by 200 million negative dentries on my test system;
inotify watches are set up nearly instantly.
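
For illustration, a d_walk() callback using the new early-exit action
might look roughly like this (a sketch only, not code from the series;
d_is_tail_negative() is a placeholder name for the flag test):

    static enum d_walk_ret walk_one_child(void *data, struct dentry *child)
    {
            /*
             * sweep_negative() keeps negative dentries at the tail of
             * their parent's d_subdirs list, so the first tail-negative
             * child means every remaining sibling is also negative and
             * the walk can stop early.
             */
            if (d_is_tail_negative(child))  /* placeholder helper */
                    return D_WALK_SKIP_SIBLINGS;

            /* ... handle the positive child as before ... */
            return D_WALK_CONTINUE;
    }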

Previously, Al raised the following concerns:
1. Possible memory corruption due to use of lock_parent() in
sweep_negative(), see patch 01 for fix.
2. The previous patch didn't catch all ways a negative dentry could
become positive (d_add, d_instantiate_new), see patch 01.
3. The previous series contained a new negative dentry limit, which
capped the negative dentry count at around 3 per hash bucket. I've
dropped this patch from the series.

Patches 2-4 are unmodified from the previous posting.

In v1 of this series, patch 1 triggered a warning:
https://lore.kernel.org/linux-fsdevel/20211218081736.GA1071@xsang-OptiPlex-9020/
I reproduced this warning, and verified it no longer occurs with my patch on
5.17 rc kernels. In particular, commit 29044dae2e74 ("fsnotify: fix fsnotify
hooks in pseudo filesystems") resolves the warning, which I verified on the
5.16 branch that the 0day bot tested. It seems that nfsdfs was using
d_delete rather than d_drop to remove some pseudo-filesystem dentries,
even though it expected negative dentries never to exist there. I don't
believe the warning reflected an error in this patch series.
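
For background, the distinction is roughly the following (my summary of
longstanding dcache behavior, not something this series changes):

    d_delete(dentry); /* if this is the last reference, the dentry
                       * stays hashed and becomes negative, caching
                       * the fact that the name no longer exists */

    d_drop(dentry);   /* unhashes the dentry, so later lookups can
                       * never find it, negative or otherwise */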

v2:
- explain the nfsd warning
- remove sweep_negative() call from __d_add - rely on dput() for that

Konstantin Khlebnikov (2):
  dcache: add action D_WALK_SKIP_SIBLINGS to d_walk()
  dcache: stop walking siblings if remaining dentries all negative

Stephen Brennan (2):
  dcache: sweep cached negative dentries to the end of list of siblings
  fsnotify: stop walking child dentries if remaining tail is negative

 fs/dcache.c            | 101 +++++++++++++++++++++++++++++++++++++++--
 fs/libfs.c             |   3 ++
 fs/notify/fsnotify.c   |   6 ++-
 include/linux/dcache.h |   6 +++
 4 files changed, 110 insertions(+), 6 deletions(-)

--
2.30.2