Message-ID: <20230502111913.GA525452@unreal>
Date: Tue, 2 May 2023 14:19:13 +0300
From: Leon Romanovsky <leon@...nel.org>
To: Manish Chopra <manishc@...vell.com>
Cc: "kuba@...nel.org" <kuba@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Ariel Elior <aelior@...vell.com>,
Alok Prasad <palok@...vell.com>,
Sudarsana Reddy Kalluru <skalluru@...vell.com>,
"David S . Miller" <davem@...emloft.net>
Subject: Re: [EXT] Re: [PATCH net] qed/qede: Fix scheduling while atomic
On Wed, Apr 26, 2023 at 08:11:04AM +0000, Manish Chopra wrote:
> Hi Leon,
>
> > -----Original Message-----
> > From: Leon Romanovsky <leon@...nel.org>
> > Sent: Wednesday, April 26, 2023 12:06 PM
> > To: Manish Chopra <manishc@...vell.com>
> > Cc: kuba@...nel.org; netdev@...r.kernel.org; Ariel Elior
> > <aelior@...vell.com>; Alok Prasad <palok@...vell.com>; Sudarsana Reddy
> > Kalluru <skalluru@...vell.com>; David S . Miller <davem@...emloft.net>
> > Subject: [EXT] Re: [PATCH net] qed/qede: Fix scheduling while atomic
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > On Tue, Apr 25, 2023 at 05:25:48AM -0700, Manish Chopra wrote:
> > > The bonding module collects statistics while holding a spinlock;
> > > beneath that, the qede->qed driver statistics flow gets scheduled
> > > out due to the usleep_range() used in the PTT acquire logic, which
> > > results in the bug and traces below -
> > >
> > > [ 3673.988874] Hardware name: HPE ProLiant DL365 Gen10 Plus/ProLiant DL365 Gen10 Plus, BIOS A42 10/29/2021
> > > [ 3673.988878] Call Trace:
> > > [ 3673.988891]  dump_stack_lvl+0x34/0x44
> > > [ 3673.988908]  __schedule_bug.cold+0x47/0x53
> > > [ 3673.988918]  __schedule+0x3fb/0x560
> > > [ 3673.988929]  schedule+0x43/0xb0
> > > [ 3673.988932]  schedule_hrtimeout_range_clock+0xbf/0x1b0
> > > [ 3673.988937]  ? __hrtimer_init+0xc0/0xc0
> > > [ 3673.988950]  usleep_range+0x5e/0x80
> > > [ 3673.988955]  qed_ptt_acquire+0x2b/0xd0 [qed]
> > > [ 3673.988981]  _qed_get_vport_stats+0x141/0x240 [qed]
> > > [ 3673.989001]  qed_get_vport_stats+0x18/0x80 [qed]
> > > [ 3673.989016]  qede_fill_by_demand_stats+0x37/0x400 [qede]
> > > [ 3673.989028]  qede_get_stats64+0x19/0xe0 [qede]
> > > [ 3673.989034]  dev_get_stats+0x5c/0xc0
> > > [ 3673.989045]  netstat_show.constprop.0+0x52/0xb0
> > > [ 3673.989055]  dev_attr_show+0x19/0x40
> > > [ 3673.989065]  sysfs_kf_seq_show+0x9b/0xf0
> > > [ 3673.989076]  seq_read_iter+0x120/0x4b0
> > > [ 3673.989087]  new_sync_read+0x118/0x1a0
> > > [ 3673.989095]  vfs_read+0xf3/0x180
> > > [ 3673.989099]  ksys_read+0x5f/0xe0
> > > [ 3673.989102]  do_syscall_64+0x3b/0x90
> > > [ 3673.989109]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> > > [ 3673.989115] RIP: 0033:0x7f8467d0b082
> > > [ 3673.989119] Code: c0 e9 b2 fe ff ff 50 48 8d 3d ca 05 08 00 e8 35 e7 01 00 0f 1f 44 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 0f 05 <48> 3d 00 f0 ff ff 77 56 c3 0f 1f 44 00 00 48 83 ec 28 48 89 54 24
> > > [ 3673.989121] RSP: 002b:00007ffffb21fd08 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
> > > [ 3673.989127] RAX: ffffffffffffffda RBX: 000000000100eca0 RCX: 00007f8467d0b082
> > > [ 3673.989128] RDX: 00000000000003ff RSI: 00007ffffb21fdc0 RDI: 0000000000000003
> > > [ 3673.989130] RBP: 00007f8467b96028 R08: 0000000000000010 R09: 00007ffffb21ec00
> > > [ 3673.989132] R10: 00007ffffb27b170 R11: 0000000000000246 R12: 00000000000000f0
> > > [ 3673.989134] R13: 0000000000000003 R14: 00007f8467b92000 R15: 0000000000045a05
> > > [ 3673.989139] CPU: 30 PID: 285188 Comm: read_all Kdump: loaded Tainted: G W OE
> > >
> > > Fix this by having the caller (QEDE driver flows) provide the
> > > context, i.e., whether or not it could be running in an atomic
> > > context, when getting the vport stats from the QED driver. Based on
> > > the provided context, the QED driver decides whether it may
> > > schedule out while acquiring the PTT BAR window.
> >
> > And why don't you make qed_ptt_acquire() atomic only?
> >
> > It would be much easier to do so than adding is_atomic in all the places.
>
> qed_ptt_acquire() is quite crucial and delicate for HW access; it is used in many places throughout
> the driver and from various different flows. Making it completely atomic for all flows (even the
> non-atomic flows, which are roughly 99.9% of them, the only exception being the .ndo_get_stats64()
> flow, which could be atomic in a bonding configuration) sounds aggressive, and I am afraid it could
> introduce regressions in the driver, as the impact would span all driver flows. Currently there is
> only a single functional flow (getting the vport stats) that requires qed_ptt_acquire() to be atomic,
> so it is handled exclusively, keeping all other driver flows intact from a functional-regression
> point of view.
Sorry, but I don't understand which regression you are talking about.
Your change to support atomic context was simply switching from usleep_range() to udelay():
+	if (is_atomic)
+		udelay(QED_BAR_ACQUIRE_TIMEOUT_UDELAY);
+	else
+		usleep_range(QED_BAR_ACQUIRE_TIMEOUT_USLEEP,
+			     QED_BAR_ACQUIRE_TIMEOUT_USLEEP * 2);
Also, there is a section about this in the documentation.
Documentation/networking/statistics.rst:
200 The `.ndo_get_stats64` callback can not sleep because of accesses
201 via `/proc/net/dev`. If driver may sleep when retrieving the statistics
202 from the device it should do so periodically asynchronously and only return
203 a recent copy from `.ndo_get_stats64`. Ethtool interrupt coalescing interface
204 allows setting the frequency of refreshing statistics, if needed.
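The pattern that section describes is to refresh the counters from a
sleepable context (e.g. delayed work) and have .ndo_get_stats64 only
copy a cached snapshot. A minimal sketch, where priv_sketch,
read_hw_stats() and the lock layout are hypothetical names, not actual
qede code:

	#include <linux/netdevice.h>
	#include <linux/workqueue.h>
	#include <linux/spinlock.h>

	/* Illustrative private state, not the real qede layout. */
	struct priv_sketch {
		struct delayed_work stats_work;
		spinlock_t stats_lock;
		struct rtnl_link_stats64 cached_stats;
	};

	static void stats_refresh_work(struct work_struct *work)
	{
		struct priv_sketch *priv = container_of(to_delayed_work(work),
							struct priv_sketch,
							stats_work);
		struct rtnl_link_stats64 tmp;

		/* Sleepable context: the PTT acquire may usleep_range() here. */
		read_hw_stats(priv, &tmp);	/* hypothetical HW read */

		spin_lock_bh(&priv->stats_lock);
		priv->cached_stats = tmp;
		spin_unlock_bh(&priv->stats_lock);

		schedule_delayed_work(&priv->stats_work, HZ);
	}

	static void sketch_get_stats64(struct net_device *dev,
				       struct rtnl_link_stats64 *stats)
	{
		struct priv_sketch *priv = netdev_priv(dev);

		/* Atomic-safe: never touches HW, only copies the snapshot. */
		spin_lock_bh(&priv->stats_lock);
		*stats = priv->cached_stats;
		spin_unlock_bh(&priv->stats_lock);
	}

That keeps every HW access in a context where sleeping is legal, so the
atomicity question never reaches qed_ptt_acquire() at all.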
Thanks