Message-ID: <734977390eeecba39789df939a00904e87367e5e.camel@huaweicloud.com>
Date: Fri, 27 Sep 2024 14:18:10 +0200
From: Roberto Sassu <roberto.sassu@...weicloud.com>
To: Paul Moore <paul@...l-moore.com>, Mimi Zohar <zohar@...ux.ibm.com>,
Roberto Sassu <roberto.sassu@...wei.com>, Casey Schaufler
<casey@...aufler-ca.com>, syzbot
<syzbot+listfc277c7cb94932601d96@...kaller.appspotmail.com>, Kent
Overstreet <kent.overstreet@...ux.dev>
Cc: linux-kernel@...r.kernel.org, linux-security-module@...r.kernel.org,
syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] Monthly lsm report (Sep 2024)
On Tue, 2024-09-24 at 13:53 +0200, Roberto Sassu wrote:
> On Mon, 2024-09-23 at 08:06 -0400, Paul Moore wrote:
> > On Mon, Sep 23, 2024 at 5:02 AM syzbot
> > <syzbot+listfc277c7cb94932601d96@...kaller.appspotmail.com> wrote:
> > >
> > > Hello lsm maintainers/developers,
> > >
> > > This is a 31-day syzbot report for the lsm subsystem.
> > > All related reports/information can be found at:
> > > https://syzkaller.appspot.com/upstream/s/lsm
> > >
> > > During the period, 0 new issues were detected and 0 were fixed.
> > > In total, 4 issues are still open and 27 have been fixed so far.
> > >
> > > Some of the still happening issues:
> > >
> > > Ref Crashes Repro Title
> > > <1> 306 No INFO: task hung in process_measurement (2)
> > > https://syzkaller.appspot.com/bug?extid=1de5a37cb85a2d536330
> >
> > Mimi, Roberto,
> >
> > Any chance this is related in any way to this report:
> >
> > https://lore.kernel.org/linux-security-module/CALAgD-4hkHVcCq2ycdwnA2hYDBMqijLUOfZgvf1WfFpU-8+42w@mail.gmail.com/
>
> I reproduced the latter, but I got a different result (the kernel
> crashed in a different place).
>
> It seems to be a corruption case, while the former looks more like a
> lock inversion issue. I will check more.
+ Kent Overstreet
https://syzkaller.appspot.com/bug?extid=1de5a37cb85a2d536330
It happens a few times per day, starting from commit 4a39ac5b7d62
(which is followed by a lot of merges). The bug was likely introduced
there.
In all recent reports, I noticed that there is always the following
lock sequence:
[ 291.584319][ T30] 5 locks held by syz.0.75/5970:
[ 291.594487][ T30] #0: ffff888064066420 (sb_writers#25){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90
[ 291.603984][ T30] #1: ffff88805d8b0148 (&sb->s_type->i_mutex_key#30){++++}-{3:3}, at: do_truncate+0x20c/0x310
[ 291.614497][ T30] #2: ffff888054700a38 (&c->snapshot_create_lock){.+.+}-{3:3}, at: bch2_truncate+0x16d/0x2c0
[ 291.624871][ T30] #3: ffff888054704398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7de/0xd20
[ 291.635446][ T30] #4: ffff8880547266d0 (&c->gc_lock){.+.+}-{3:3}, at: bch2_btree_update_start+0x682/0x14e0
IMA is stuck too, since it is waiting for the inode lock to be released:
[ 291.645689][ T30] 1 lock held by syz.0.75/6010:
[ 291.650622][ T30] #0: ffff88805d8b0148 (&sb->s_type->i_mutex_key#30){++++}-{3:3}, at: process_measurement+0x439/0x1fb0
It seems that the inode lock is held by someone else, who is not able
to release it. Maybe it is related to bch2_journal_reclaim_thread(),
but I don't know for sure.
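
In case it helps, below is a minimal userspace sketch of the blocking
shape I am describing (plain pthreads, not kernel code; truncate_path
and ima_path are illustrative stand-ins, not kernel symbols): one task
takes the inode lock and then blocks inside the filesystem without ever
releasing it, while the IMA task blocks forever on the same inode lock.

/* Illustrative model of the hang; compile with: cc -pthread hang.c
 * "inode_lock" stands in for &sb->s_type->i_mutex_key#30, and the
 * never-signalled condition variable stands in for whatever bcachefs
 * is waiting on (e.g. journal reclaim).
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t fs_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t fs_progress = PTHREAD_COND_INITIALIZER;

/* Models syz.0.75/5970: do_truncate() takes the inode lock, then the
 * filesystem operation underneath it never completes. */
static void *truncate_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&inode_lock);	/* lock #1 in the splat */
	pthread_mutex_lock(&fs_lock);
	for (;;)				/* never signalled: models */
		pthread_cond_wait(&fs_progress, &fs_lock); /* the stuck fs op */
	return NULL;	/* unreachable; inode_lock is never released */
}

/* Models syz.0.75/6010: process_measurement() blocks waiting for the
 * inode lock that the truncate path will never release. */
static void *ima_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&inode_lock);
	pthread_mutex_unlock(&inode_lock);
	return NULL;
}

int main(void)
{
	pthread_t fs, ima;

	pthread_create(&fs, NULL, truncate_path, NULL);
	sleep(1);		/* let the truncate path win the inode lock */
	pthread_create(&ima, NULL, ima_path, NULL);

	fprintf(stderr, "both tasks started; the IMA task now hangs\n");
	pthread_join(ima, NULL);	/* never returns, like the hung task */
	return 0;
}

This only models the symptom, of course; the real question is why the
btree update / journal reclaim never makes progress.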
Kent, do you have time to look at this report?
Thanks!
Roberto