Message-ID: <20260121182920.hfen6vsf5o27wi3z@inspiron>
Date: Wed, 21 Jan 2026 23:59:20 +0530
From: Prithvi <activprithvi@...il.com>
To: a.hindborg@...nel.org, leitao@...ian.org
Cc: bvanassche@....org, martin.petersen@...cle.com,
linux-scsi@...r.kernel.org, target-devel@...r.kernel.org,
linux-kernel@...r.kernel.org, hch@....de, jlbec@...lplan.org,
linux-fsdevel@...r.kernel.org, linux-kernel-mentees@...ts.linux.dev,
skhan@...uxfoundation.org, david.hunter.linux@...il.com,
khalid@...nel.org,
syzbot+f6e8174215573a84b797@...kaller.appspotmail.com,
stable@...r.kernel.org
Subject: Re: [PATCH] scsi: target: Fix recursive locking in
__configfs_open_file()
On Wed, Jan 21, 2026 at 11:21:46PM +0530, Prithvi wrote:
> On Tue, Jan 20, 2026 at 05:48:16AM -0800, Bart Van Assche wrote:
> > On 1/19/26 10:50 AM, Prithvi wrote:
> > > Possible unsafe locking scenario:
> > >
> > > CPU0
> > > ----
> > > lock(&p->frag_sem);
> > > lock(&p->frag_sem);
> > The least intrusive way to suppress this type of lockdep complaints is
> > by using lockdep_register_key() and lockdep_unregister_key().
> >
> > Thanks,
> >
> > Bart.
>
> Hello Bart,
>
> I tried using lockdep_register_key() and lockdep_unregister_key() for the
> frag_sem lock; however, it still gives the possible recursive locking
> warning. Here is the patch and the bug report from its test:
>
> https://lore.kernel.org/all/6767d8ea.050a0220.226966.0021.GAE@google.com/T/#m3203ceddf3423b7116ba9225d182771608f93a6f
>
> Would using down_read_nested() and subclasses be a better option here?
>
> I also checked out some documentation regarding it and learnt that to use
> the _nested() form, the hierarchy among the locks should be mapped
> accurately; however, IIUC, there isn't any hierarchy between the locks in
> this case, is this right?
>
> Apologies if I am missing something obvious here, and thanks for your
> time and guidance.
>
> Best Regards,
> Prithvi
Hello Andreas and Breno,
This thread concerns a patch that fixes a possible deadlock in
__configfs_open_file(); the syzkaller dashboard link is:
https://syzkaller.appspot.com/bug?extid=f6e8174215573a84b797
First, flush_write_buffer() is called, which acquires the frag_sem lock
and then invokes the store callback, which in this case is
target_core_item_dbroot_store(). target_core_item_dbroot_store() calls
filp_open(), which ultimately reaches configfs_write_iter(), where the
thread tries to acquire frag_sem again, creating the possibility of
recursive locking.
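
As I understand it, the chain looks roughly like this (function names
taken from the report; the intermediate VFS steps are elided):

```
configfs_write_iter()
  flush_write_buffer()
    down_read(&frag->frag_sem)          /* first acquisition */
    target_core_item_dbroot_store()
      filp_open()
        ...                             /* VFS / configfs path */
          down_read(&frag->frag_sem)    /* second acquisition -> lockdep warning */
```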
In the initial patch, I tried to fix this by replacing the filp_open()
call in target_core_item_dbroot_store() with kern_path(), since only the
existence of the path needs to be checked; there is no need to actually
open the file via configfs_write_iter(). This avoids acquiring frag_sem
in a nested manner and thus prevents the possibility of recursive
locking.
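
The idea is roughly the following (a hedged sketch, not the exact patch;
the helper name db_root_exists() is mine, only kern_path()/path_put()
are real kernel APIs):

```c
#include <linux/namei.h>
#include <linux/path.h>

/* Resolve the path to check that it exists, without opening the file
 * (and therefore without re-entering configfs and taking frag_sem again).
 */
static bool db_root_exists(const char *db_root)
{
	struct path path;
	int ret;

	ret = kern_path(db_root, LOOKUP_FOLLOW, &path);
	if (ret)
		return false;
	path_put(&path);
	return true;
}
```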
On further inspection, I found three functions where down_write() is
used on frag_sem, which, IIUC, might contribute to the recursive
locking:
1. configfs_rmdir() - calls down_write_killable(&frag->frag_sem);
2. configfs_unregister_group() - calls down_write(&frag->frag_sem);
3. configfs_unregister_subsystem() - calls down_write(&frag->frag_sem).
Bart suggested that this could be a false positive and could be
suppressed using lockdep_register_key() and lockdep_unregister_key().
However, when I tried this approach, the possible recursive locking
warning persisted; the report can be found here:
https://lore.kernel.org/all/6767d8ea.050a0220.226966.0021.GAE@google.com/T/#m3203ceddf3423b7116ba9225d182771608f93a6f
IIUC, we could instead use down_read_nested() with lock subclasses here;
however, according to the lockdep documentation:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/locking/lockdep-design.rst#n230
the _nested() form should only be used when the hierarchy among the
locks is mapped accurately, and my understanding is that there is no
such hierarchy between the locks in this case.
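
For completeness, if there were a genuine, fixed nesting order, the
inner acquisition could be annotated along these lines (a hedged sketch
only; as noted above, there does not appear to be such a hierarchy
here, so this would just hide the report rather than fix it):

```c
/* Hypothetical annotation of the inner acquisition; only valid if the
 * outer->inner nesting order is genuinely fixed.
 */
down_read_nested(&frag->frag_sem, SINGLE_DEPTH_NESTING);
/* ... */
up_read(&frag->frag_sem);
```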
Your guidance on whether the kern_path()-based fix is the right
direction here, or whether there is a more appropriate way to handle
this from a configfs or VFS point of view, would be very valuable.
Thank you for your time and guidance.
Best Regards,
Prithvi