Message-ID: <CAJ2QiJLWgxw0X8rkMhKgAGgiFS5xhrhMF5Dct_J791Kt-ys7QQ@mail.gmail.com>
Date: Mon, 19 Jul 2021 16:05:45 +0530
From: Prabhakar Kushwaha <prabhakar.pkin@...il.com>
To: Jia He <justin.he@....com>
Cc: Ariel Elior <aelior@...vell.com>, GR-everest-linux-l2@...vell.com,
"David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
nd@....com, Shai Malin <malin1024@...il.com>,
Shai Malin <smalin@...vell.com>,
Prabhakar Kushwaha <pkushwaha@...vell.com>
Subject: Re: [PATCH] qed: fix possible unpaired spin_{un}lock_bh in _qed_mcp_cmd_and_union()
Hi Jia,
On Thu, Jul 15, 2021 at 2:28 PM Jia He <justin.he@....com> wrote:
>
> Liajian reported a bug_on hit on a ThunderX2 arm64 server with FastLinQ
> QL41000 ethernet controller:
> BUG: scheduling while atomic: kworker/0:4/531/0x00000200
> [qed_probe:488()]hw prepare failed
> kernel BUG at mm/vmalloc.c:2355!
> Internal error: Oops - BUG: 0 [#1] SMP
> CPU: 0 PID: 531 Comm: kworker/0:4 Tainted: G W 5.4.0-77-generic #86-Ubuntu
> pstate: 00400009 (nzcv daif +PAN -UAO)
> Call trace:
> vunmap+0x4c/0x50
> iounmap+0x48/0x58
> qed_free_pci+0x60/0x80 [qed]
> qed_probe+0x35c/0x688 [qed]
> __qede_probe+0x88/0x5c8 [qede]
> qede_probe+0x60/0xe0 [qede]
> local_pci_probe+0x48/0xa0
> work_for_cpu_fn+0x24/0x38
> process_one_work+0x1d0/0x468
> worker_thread+0x238/0x4e0
> kthread+0xf0/0x118
> ret_from_fork+0x10/0x18
>
> In this case, qed_hw_prepare() returns an error due to a hw/fw error, but
> in theory the work queue should be running in process context rather than
> in an atomic (interrupt/BH-disabled) context.
>
> The root cause might be an unpaired spin_{un}lock_bh() in
> _qed_mcp_cmd_and_union(), which causes the bottom half to be disabled
> incorrectly.
>
> Reported-by: Lijian Zhang <Lijian.Zhang@....com>
> Signed-off-by: Jia He <justin.he@....com>
> ---
This patch adds an additional spin_{un}lock_bh().
Can you please explain the exact flow that causes the unpaired
spin_{un}lock_bh()?
Also, as per the description, it looks like you are not yet sure of the
actual root cause. Does this patch really solve the problem?
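
For reference, here is a minimal sketch (hypothetical; not the actual
_qed_mcp_cmd_and_union() code, and example_* names are made up) of the
kind of early-exit path that would leave bottom halves disabled and
later trigger "scheduling while atomic" when a sleeping call such as
iounmap()/vunmap() runs in the probe error path:

	/*
	 * Hypothetical sketch, not the actual qed code: returning
	 * between spin_lock_bh() and spin_unlock_bh() leaves the
	 * BH-disable count unbalanced, so a later sleeping call
	 * (e.g. iounmap()/vunmap()) hits "BUG: scheduling while atomic".
	 */
	static int example_send_cmd(struct example_dev *edev)
	{
		spin_lock_bh(&edev->cmd_lock);

		if (!example_cmd_slot_free(edev))
			return -EAGAIN;	/* bug: BHs stay disabled here */

		example_issue_cmd(edev);
		spin_unlock_bh(&edev->cmd_lock);

		return 0;
	}

If that is the pattern you are seeing, the fix would be to drop the lock
on every exit path (or use a common unlock label), rather than adding an
extra spin_{un}lock_bh() elsewhere.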
--pk