Message-ID: <CACT4Y+ZEGHp9vg68O7dDfRWj+_bKZQCGQX2AcFckm8GBYaREug@mail.gmail.com>
Date: Thu, 22 Mar 2018 15:47:41 +0100
From: Dmitry Vyukov <dvyukov@...gle.com>
To: Leon Romanovsky <leonro@...lanox.com>
Cc: syzbot <syzbot+3b4acab09b6463472d0a@...kaller.appspotmail.com>,
danielj@...lanox.com, dledford@...hat.com,
Jason Gunthorpe <jgg@...pe.ca>,
Johannes Berg <johannes.berg@...el.com>,
LKML <linux-kernel@...r.kernel.org>, linux-rdma@...r.kernel.org,
monis@...lanox.com, Paolo Abeni <pabeni@...hat.com>,
parav@...lanox.com, Roland Dreier <roland@...estorage.com>,
syzkaller-bugs@...glegroups.com, yuval.shaia@...cle.com
Subject: Re: WARNING: ODEBUG bug in process_one_req
On Thu, Mar 22, 2018 at 3:29 PM, Leon Romanovsky <leonro@...lanox.com> wrote:
> On Thu, Mar 22, 2018 at 02:36:55PM +0100, Dmitry Vyukov wrote:
>> On Thu, Mar 22, 2018 at 2:20 PM, Leon Romanovsky <leonro@...lanox.com> wrote:
>> > On Thu, Mar 22, 2018 at 02:10:21PM +0100, Dmitry Vyukov wrote:
>> >> This bug is actively happening several times per day; if it is
>> >> closed, syzbot will open another bug on the next crash.
>> >> This bug has a reproducer, so why do you think it is invalid?
>> >
>> > I tried to reproduce it on the latest rdma-next and failed, so I
>> > wanted to start from a clean page and see if it is still relevant.
>>
>> Ah, I see.
>> You can now see the current status on dashboard:
>> https://syzkaller.appspot.com/bug?id=d4ac7bfeafac8a3d6d06123e078462ac765415e7
>> Yes, it still happens.
>>
>> syzbot has already reported another incarnation of this bug; you can
>> see the link to the dashboard in the email:
>> https://syzkaller.appspot.com/bug?extid=b8d5eb964e412cfd2678
>> and it contains a link to the first version with a reproducer.
>>
>> > Does it still reproduce on latest rdma-next?
>>
>> syzbot does not test rdma-next.
>>
>> But can you reproduce it on the upstream tree where syzbot hits it?
>> Just to make sure: you have ODEBUG enabled, right?
>
> Thanks for the tip. I added more debug options and went back to trying
> the repro on a clean v4.16-rc1; it looks like I still don't have all
> the needed config options (SELinux).
>
> mount(selinuxfs) failed (errno 2)
It seems that you have old syzkaller sources; it now explicitly does
not fail on ENOENT:
// selinux mount used to be at /selinux, but then moved to /sys/fs/selinux.
const char* selinux_path = "./syz-tmp/newroot/selinux";
if (mount("/selinux", selinux_path, NULL, mount_flags, NULL)) {
        if (errno != ENOENT)
                fail("mount(/selinux) failed");
        if (mount("/sys/fs/selinux", selinux_path, NULL, mount_flags,
                  NULL) && errno != ENOENT)
                fail("mount(/sys/fs/selinux) failed");
}
> Can you please cancel my email that closes this bug?
There is no such option. The new bug has already been created:
https://syzkaller.appspot.com/bug?id=6ecd3fdba0501c8843300056375abea0880816d2
If we reopen the old one, we will have two bugs for the same root
cause, which will lead to a mess.