Message-ID: <ZHRA0Ef6l9YwVDfE@nanopsycho>
Date: Mon, 29 May 2023 08:06:22 +0200
From: Jiri Pirko <jiri@...nulli.us>
To: Manish Chopra <manishc@...vell.com>
Cc: "kuba@...nel.org" <kuba@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Ariel Elior <aelior@...vell.com>, Alok Prasad <palok@...vell.com>,
Sudarsana Reddy Kalluru <skalluru@...vell.com>,
David Miller <davem@...emloft.net>
Subject: Re: [EXT] Re: [PATCH v5 net] qede: Fix scheduling while atomic
Thu, May 25, 2023 at 05:27:03PM CEST, manishc@...vell.com wrote:
>Hi Jiri,
>
>> -----Original Message-----
>> From: Jiri Pirko <jiri@...nulli.us>
>> Sent: Wednesday, May 24, 2023 5:01 PM
>> To: Manish Chopra <manishc@...vell.com>
>> Cc: kuba@...nel.org; netdev@...r.kernel.org; Ariel Elior
>> <aelior@...vell.com>; Alok Prasad <palok@...vell.com>; Sudarsana Reddy
>> Kalluru <skalluru@...vell.com>; David Miller <davem@...emloft.net>
>> Subject: [EXT] Re: [PATCH v5 net] qede: Fix scheduling while atomic
>>
>> External Email
>>
>> ----------------------------------------------------------------------
>> Tue, May 23, 2023 at 04:42:35PM CEST, manishc@...vell.com wrote:
>> >The bonding module collects statistics while holding a spinlock;
>> >beneath that, the qede->qed driver statistics flow gets scheduled out
>> >due to the usleep_range() used in the PTT acquire logic, which results
>> >in the bug and traces below -
>> >
>> >[ 3673.988874] Hardware name: HPE ProLiant DL365 Gen10 Plus/ProLiant DL365 Gen10 Plus, BIOS A42 10/29/2021
>> >[ 3673.988878] Call Trace:
>> >[ 3673.988891]  dump_stack_lvl+0x34/0x44
>> >[ 3673.988908]  __schedule_bug.cold+0x47/0x53
>> >[ 3673.988918]  __schedule+0x3fb/0x560
>> >[ 3673.988929]  schedule+0x43/0xb0
>> >[ 3673.988932]  schedule_hrtimeout_range_clock+0xbf/0x1b0
>> >[ 3673.988937]  ? __hrtimer_init+0xc0/0xc0
>> >[ 3673.988950]  usleep_range+0x5e/0x80
>> >[ 3673.988955]  qed_ptt_acquire+0x2b/0xd0 [qed]
>> >[ 3673.988981]  _qed_get_vport_stats+0x141/0x240 [qed]
>> >[ 3673.989001]  qed_get_vport_stats+0x18/0x80 [qed]
>> >[ 3673.989016]  qede_fill_by_demand_stats+0x37/0x400 [qede]
>> >[ 3673.989028]  qede_get_stats64+0x19/0xe0 [qede]
>> >[ 3673.989034]  dev_get_stats+0x5c/0xc0
>> >[ 3673.989045]  netstat_show.constprop.0+0x52/0xb0
>> >[ 3673.989055]  dev_attr_show+0x19/0x40
>> >[ 3673.989065]  sysfs_kf_seq_show+0x9b/0xf0
>> >[ 3673.989076]  seq_read_iter+0x120/0x4b0
>> >[ 3673.989087]  new_sync_read+0x118/0x1a0
>> >[ 3673.989095]  vfs_read+0xf3/0x180
>> >[ 3673.989099]  ksys_read+0x5f/0xe0
>> >[ 3673.989102]  do_syscall_64+0x3b/0x90
>> >[ 3673.989109]  entry_SYSCALL_64_after_hwframe+0x44/0xae
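[Editor's note, not part of the original mail: the "scheduling while atomic"
splat above arises because usleep_range() may sleep, and sleeping is illegal
while a spinlock is held. A minimal kernel-style sketch of the problematic
pattern, not runnable code; all names except usleep_range(), spin_lock(), and
spin_unlock() are hypothetical stand-ins for the call chain in the trace:]

```c
/* Sketch only (kernel context). The caller enters atomic context by
 * taking a spinlock, then a function deep in the callee chain sleeps.
 */
spin_lock(&stats_lock);        /* atomic context begins            */
collect_device_stats(dev);     /* hypothetical stats entry point,
                                * analogous to dev_get_stats()     */
        /* ... several layers down, inside the driver: */
        usleep_range(50, 60);  /* may sleep -> "scheduling while
                                * atomic" when reached under the
                                * spinlock taken above             */
spin_unlock(&stats_lock);
```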
>>
>> You mention "bonding module" at the beginning of this description. Where
>> exactly is that shown in the trace?
>>
>> I guess that the "spinlock" you talk about is "dev_base_lock", isn't it?
>
>The bonding functions somehow were not part of the traces, but this is the flow: the bonding
>module calls dev_get_stats() under spin_lock_nested(&bond->stats_lock, nest_level), which results in this issue.
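[Editor's note, not part of the original mail: the bonding path Manish refers
to looks roughly like the sketch below, paraphrased from memory of
drivers/net/bonding/bond_main.c; exact field names, helpers, and structure may
differ by kernel version, so treat this as an illustration of the locking
shape, not the actual source:]

```c
/* Approximate shape of the bonding stats path (not verbatim kernel code).
 * The spinlock is held across dev_get_stats() on every slave device, so
 * any slave driver that sleeps in its .ndo_get_stats64 path triggers
 * "scheduling while atomic".
 */
static void bond_get_stats(struct net_device *bond_dev,
                           struct rtnl_link_stats64 *stats)
{
        struct bonding *bond = netdev_priv(bond_dev);
        struct rtnl_link_stats64 temp;
        struct list_head *iter;
        struct slave *slave;
        int nest_level = bond_get_nest_level(bond_dev); /* hypothetical helper */

        spin_lock_nested(&bond->stats_lock, nest_level); /* atomic context */
        bond_for_each_slave_rcu(bond, slave, iter) {
                /* calls into the slave driver, e.g. qede_get_stats64() */
                dev_get_stats(slave->dev, &temp);
                /* ... accumulate into stats ... */
        }
        spin_unlock(&bond->stats_lock);
}
```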
The trace you included is obviously from a sysfs read. Either change the
trace or the description.
>
>>
>>
>> >[ 3673.989115] RIP: 0033:0x7f8467d0b082
>> >[ 3673.989119] Code: c0 e9 b2 fe ff ff 50 48 8d 3d ca 05 08 00 e8 35 e7 01 00 0f 1f 44 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 0f 05 <48> 3d 00 f0 ff ff 77 56 c3 0f 1f 44 00 00 48 83 ec 28 48 89 54 24
>> >[ 3673.989121] RSP: 002b:00007ffffb21fd08 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
>> >[ 3673.989127] RAX: ffffffffffffffda RBX: 000000000100eca0 RCX: 00007f8467d0b082
>> >[ 3673.989128] RDX: 00000000000003ff RSI: 00007ffffb21fdc0 RDI: 0000000000000003
>> >[ 3673.989130] RBP: 00007f8467b96028 R08: 0000000000000010 R09: 00007ffffb21ec00
>> >[ 3673.989132] R10: 00007ffffb27b170 R11: 0000000000000246 R12: 00000000000000f0
>> >[ 3673.989134] R13: 0000000000000003 R14: 00007f8467b92000 R15: 0000000000045a05
>> >[ 3673.989139] CPU: 30 PID: 285188 Comm: read_all Kdump: loaded Tainted: G        W  OE
[...]