Message-ID: <BY3PR18MB46127EC56DF88024DEC65488AB659@BY3PR18MB4612.namprd18.prod.outlook.com>
Date: Wed, 26 Apr 2023 08:11:04 +0000
From: Manish Chopra <manishc@...vell.com>
To: Leon Romanovsky <leon@...nel.org>
CC: "kuba@...nel.org" <kuba@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Ariel Elior <aelior@...vell.com>,
Alok Prasad <palok@...vell.com>,
Sudarsana Reddy Kalluru <skalluru@...vell.com>,
"David S . Miller" <davem@...emloft.net>
Subject: RE: [EXT] Re: [PATCH net] qed/qede: Fix scheduling while atomic
Hi Leon,
> -----Original Message-----
> From: Leon Romanovsky <leon@...nel.org>
> Sent: Wednesday, April 26, 2023 12:06 PM
> To: Manish Chopra <manishc@...vell.com>
> Cc: kuba@...nel.org; netdev@...r.kernel.org; Ariel Elior
> <aelior@...vell.com>; Alok Prasad <palok@...vell.com>; Sudarsana Reddy
> Kalluru <skalluru@...vell.com>; David S . Miller <davem@...emloft.net>
> Subject: [EXT] Re: [PATCH net] qed/qede: Fix scheduling while atomic
>
> On Tue, Apr 25, 2023 at 05:25:48AM -0700, Manish Chopra wrote:
> > The bonding module collects statistics while holding a spinlock;
> > beneath that, the qede->qed driver statistics flow gets scheduled out
> > due to the usleep_range() used in the PTT acquire logic, which results
> > in the bug and trace below -
> >
> > [ 3673.988874] Hardware name: HPE ProLiant DL365 Gen10 Plus/ProLiant DL365 Gen10 Plus, BIOS A42 10/29/2021
> > [ 3673.988878] Call Trace:
> > [ 3673.988891]  dump_stack_lvl+0x34/0x44
> > [ 3673.988908]  __schedule_bug.cold+0x47/0x53
> > [ 3673.988918]  __schedule+0x3fb/0x560
> > [ 3673.988929]  schedule+0x43/0xb0
> > [ 3673.988932]  schedule_hrtimeout_range_clock+0xbf/0x1b0
> > [ 3673.988937]  ? __hrtimer_init+0xc0/0xc0
> > [ 3673.988950]  usleep_range+0x5e/0x80
> > [ 3673.988955]  qed_ptt_acquire+0x2b/0xd0 [qed]
> > [ 3673.988981]  _qed_get_vport_stats+0x141/0x240 [qed]
> > [ 3673.989001]  qed_get_vport_stats+0x18/0x80 [qed]
> > [ 3673.989016]  qede_fill_by_demand_stats+0x37/0x400 [qede]
> > [ 3673.989028]  qede_get_stats64+0x19/0xe0 [qede]
> > [ 3673.989034]  dev_get_stats+0x5c/0xc0
> > [ 3673.989045]  netstat_show.constprop.0+0x52/0xb0
> > [ 3673.989055]  dev_attr_show+0x19/0x40
> > [ 3673.989065]  sysfs_kf_seq_show+0x9b/0xf0
> > [ 3673.989076]  seq_read_iter+0x120/0x4b0
> > [ 3673.989087]  new_sync_read+0x118/0x1a0
> > [ 3673.989095]  vfs_read+0xf3/0x180
> > [ 3673.989099]  ksys_read+0x5f/0xe0
> > [ 3673.989102]  do_syscall_64+0x3b/0x90
> > [ 3673.989109]  entry_SYSCALL_64_after_hwframe+0x44/0xae
> > [ 3673.989115] RIP: 0033:0x7f8467d0b082
> > [ 3673.989119] Code: c0 e9 b2 fe ff ff 50 48 8d 3d ca 05 08 00 e8 35 e7 01 00 0f 1f 44 00 00 f3 0f 1e fa 64 8b 04 25 18 00 00 00 85 c0 75 10 0f 05 <48> 3d 00 f0 ff ff 77 56 c3 0f 1f 44 00 00 48 83 ec 28 48 89 54 24
> > [ 3673.989121] RSP: 002b:00007ffffb21fd08 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
> > [ 3673.989127] RAX: ffffffffffffffda RBX: 000000000100eca0 RCX: 00007f8467d0b082
> > [ 3673.989128] RDX: 00000000000003ff RSI: 00007ffffb21fdc0 RDI: 0000000000000003
> > [ 3673.989130] RBP: 00007f8467b96028 R08: 0000000000000010 R09: 00007ffffb21ec00
> > [ 3673.989132] R10: 00007ffffb27b170 R11: 0000000000000246 R12: 00000000000000f0
> > [ 3673.989134] R13: 0000000000000003 R14: 00007f8467b92000 R15: 0000000000045a05
> > [ 3673.989139] CPU: 30 PID: 285188 Comm: read_all Kdump: loaded Tainted: G W OE
> >
> > Fix this by having the caller (the QEDE driver flows) provide the
> > context, i.e. whether the flow may run in atomic context or not, when
> > getting the vport stats from the QED driver. Based on the context
> > provided, the QED driver decides whether it may schedule out while
> > acquiring the PTT BAR window.
>
> And why don't you implement qed_ptt_acquire() to be atomic only?
>
> It will be much easier to do so instead of adding is_atomic in all the places.
qed_ptt_acquire() is quite crucial and delicate for HW access; it is used in many places throughout the driver and
from many different flows. Making it completely atomic for all flows (even the non-atomic ones, which are roughly
99.9% of all flows - the only exception being the .ndo_get_stats64() flow, which can run in atomic context in a bonding
configuration) sounds aggressive, and I am afraid it could introduce regressions, since the impact would span all
driver flows. Currently there is only a single functional flow (getting the vport stats) that requires qed_ptt_acquire()
to be atomic, so the change is done exclusively for that flow, keeping all other driver flows intact from a functional
regression point of view.
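For reference, here is a minimal sketch of the pattern this patch follows (function and macro names are taken
from the posted diff; try_take_free_ptt() is a hypothetical stand-in for the existing spinlock-protected free-list
lookup, so treat this as illustrative rather than the exact driver code):

	/* Illustrative sketch only - dual-path wait for a free PTT entry. */
	static struct qed_ptt *_qed_ptt_acquire(struct qed_hwfn *p_hwfn, bool is_atomic)
	{
		unsigned int i, count;

		/* Atomic callers retry more often with short busy-waits;
		 * sleepable callers keep the original usleep_range() behaviour.
		 */
		count = is_atomic ? QED_BAR_ACQUIRE_TIMEOUT_UDELAY_CNT :
				    QED_BAR_ACQUIRE_TIMEOUT_USLEEP_CNT;

		for (i = 0; i < count; i++) {
			/* try_take_free_ptt() is hypothetical; it stands for
			 * the existing lock-protected pool lookup.
			 */
			struct qed_ptt *p_ptt = try_take_free_ptt(p_hwfn);

			if (p_ptt)
				return p_ptt;

			if (is_atomic)
				udelay(QED_BAR_ACQUIRE_TIMEOUT_UDELAY);
			else
				usleep_range(QED_BAR_ACQUIRE_TIMEOUT_USLEEP,
					     QED_BAR_ACQUIRE_TIMEOUT_USLEEP * 2);
		}

		return NULL;
	}

	/* Existing callers keep the sleeping behaviour unchanged. */
	struct qed_ptt *qed_ptt_acquire(struct qed_hwfn *p_hwfn)
	{
		return _qed_ptt_acquire(p_hwfn, false);
	}

The only caller that passes is_atomic = true is the .ndo_get_stats64() path
(qede_fill_by_demand_stats(edev, true)), so every other flow keeps today's sleeping behaviour.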
Thanks.
>
> Thanks
>
> >
> > Fixes: 133fac0eedc3 ("qede: Add basic ethtool support")
> > Cc: Sudarsana Kalluru <skalluru@...vell.com>
> > Cc: David S. Miller <davem@...emloft.net>
> > Signed-off-by: Manish Chopra <manishc@...vell.com>
> > Signed-off-by: Ariel Elior <aelior@...vell.com>
> > Signed-off-by: Alok Prasad <palok@...vell.com>
> > ---
> > drivers/net/ethernet/qlogic/qed/qed_dev_api.h | 12 +++++++-
> > drivers/net/ethernet/qlogic/qed/qed_hw.c | 28 +++++++++++++++----
> > drivers/net/ethernet/qlogic/qed/qed_l2.c | 11 ++++----
> > drivers/net/ethernet/qlogic/qed/qed_l2.h | 3 +-
> > drivers/net/ethernet/qlogic/qed/qed_main.c | 4 +--
> > drivers/net/ethernet/qlogic/qede/qede.h | 2 +-
> > .../net/ethernet/qlogic/qede/qede_ethtool.c | 2 +-
> > drivers/net/ethernet/qlogic/qede/qede_main.c | 6 ++--
> > include/linux/qed/qed_eth_if.h | 2 +-
> > 9 files changed, 50 insertions(+), 20 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h
> > index f8682356d0cf..5e15a6a506c8 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h
> > @@ -182,7 +182,7 @@ int qed_hw_prepare(struct qed_dev *cdev,
> >  void qed_hw_remove(struct qed_dev *cdev);
> >
> >  /**
> > - * qed_ptt_acquire(): Allocate a PTT window.
> > + * qed_ptt_acquire(): Allocate a PTT window in sleepable context.
> >   *
> >   * @p_hwfn: HW device data.
> >   *
> > @@ -193,6 +193,16 @@ void qed_hw_remove(struct qed_dev *cdev);
> >   */
> >  struct qed_ptt *qed_ptt_acquire(struct qed_hwfn *p_hwfn);
> >
> > +/**
> > + * @brief _qed_ptt_acquire - Allocate a PTT window based on the context
> > + *
> > + * @param p_hwfn
> > + * @param is_atomic - acquire ptt based on this context (sleepable or unsleepable)
> > + *
> > + * @return struct qed_ptt
> > + */
> > +struct qed_ptt *_qed_ptt_acquire(struct qed_hwfn *p_hwfn, bool is_atomic);
> > +
> >  /**
> >   * qed_ptt_release(): Release PTT Window.
> >   *
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_hw.c b/drivers/net/ethernet/qlogic/qed/qed_hw.c
> > index 554f30b0cfd5..4e8bfa0194e7 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_hw.c
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_hw.c
> > @@ -23,7 +23,10 @@
> > #include "qed_reg_addr.h"
> > #include "qed_sriov.h"
> >
> > -#define QED_BAR_ACQUIRE_TIMEOUT 1000
> > +#define QED_BAR_ACQUIRE_TIMEOUT_USLEEP_CNT 1000
> > +#define QED_BAR_ACQUIRE_TIMEOUT_USLEEP 1000
> > +#define QED_BAR_ACQUIRE_TIMEOUT_UDELAY_CNT 100000
> > +#define QED_BAR_ACQUIRE_TIMEOUT_UDELAY 10
> >
> > /* Invalid values */
> > #define QED_BAR_INVALID_OFFSET (cpu_to_le32(-1))
> > @@ -83,13 +86,18 @@ void qed_ptt_pool_free(struct qed_hwfn *p_hwfn)
> >  	p_hwfn->p_ptt_pool = NULL;
> >  }
> >
> > -struct qed_ptt *qed_ptt_acquire(struct qed_hwfn *p_hwfn)
> > +struct qed_ptt *_qed_ptt_acquire(struct qed_hwfn *p_hwfn, bool is_atomic)
> >  {
> >  	struct qed_ptt *p_ptt;
> > -	unsigned int i;
> > +	unsigned int i, count;
> > +
> > +	if (is_atomic)
> > +		count = QED_BAR_ACQUIRE_TIMEOUT_UDELAY_CNT;
> > +	else
> > +		count = QED_BAR_ACQUIRE_TIMEOUT_USLEEP_CNT;
> >
> >  	/* Take the free PTT from the list */
> > -	for (i = 0; i < QED_BAR_ACQUIRE_TIMEOUT; i++) {
> > +	for (i = 0; i < count; i++) {
> >  		spin_lock_bh(&p_hwfn->p_ptt_pool->lock);
> >
> >  		if (!list_empty(&p_hwfn->p_ptt_pool->free_list)) {
> > @@ -105,13 +113,23 @@ struct qed_ptt *qed_ptt_acquire(struct qed_hwfn *p_hwfn)
> >  		}
> >
> >  		spin_unlock_bh(&p_hwfn->p_ptt_pool->lock);
> > -		usleep_range(1000, 2000);
> > +
> > +		if (is_atomic)
> > +			udelay(QED_BAR_ACQUIRE_TIMEOUT_UDELAY);
> > +		else
> > +			usleep_range(QED_BAR_ACQUIRE_TIMEOUT_USLEEP,
> > +				     QED_BAR_ACQUIRE_TIMEOUT_USLEEP * 2);
> >  	}
> >
> >  	DP_NOTICE(p_hwfn, "PTT acquire timeout - failed to allocate PTT\n");
> >  	return NULL;
> >  }
> >
> > +struct qed_ptt *qed_ptt_acquire(struct qed_hwfn *p_hwfn)
> > +{
> > +	return _qed_ptt_acquire(p_hwfn, false);
> > +}
> > +
> >  void qed_ptt_release(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
> >  {
> >  	spin_lock_bh(&p_hwfn->p_ptt_pool->lock);
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
> > index 2edd6bf64a3c..46d8d35dc7ac 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
> > @@ -1863,7 +1863,7 @@ static void __qed_get_vport_stats(struct qed_hwfn *p_hwfn,
> >  }
> >
> >  static void _qed_get_vport_stats(struct qed_dev *cdev,
> > -				 struct qed_eth_stats *stats)
> > +				 struct qed_eth_stats *stats, bool is_atomic)
> >  {
> >  	u8 fw_vport = 0;
> >  	int i;
> > @@ -1872,7 +1872,7 @@ static void _qed_get_vport_stats(struct qed_dev *cdev,
> >
> >  	for_each_hwfn(cdev, i) {
> >  		struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
> > -		struct qed_ptt *p_ptt = IS_PF(cdev) ? qed_ptt_acquire(p_hwfn)
> > +		struct qed_ptt *p_ptt = IS_PF(cdev) ? _qed_ptt_acquire(p_hwfn, is_atomic)
> >  						    : NULL;
> >  		bool b_get_port_stats;
> >
> > @@ -1899,7 +1899,8 @@ static void _qed_get_vport_stats(struct qed_dev *cdev,
> >  	}
> >  }
> >
> > -void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats)
> > +void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats,
> > +			 bool is_atomic)
> >  {
> >  	u32 i;
> >
> > @@ -1908,7 +1909,7 @@ void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats)
> >  		return;
> >  	}
> >
> > -	_qed_get_vport_stats(cdev, stats);
> > +	_qed_get_vport_stats(cdev, stats, is_atomic);
> >
> >  	if (!cdev->reset_stats)
> >  		return;
> > @@ -1960,7 +1961,7 @@ void qed_reset_vport_stats(struct qed_dev *cdev)
> >  	if (!cdev->reset_stats) {
> >  		DP_INFO(cdev, "Reset stats not allocated\n");
> >  	} else {
> > -		_qed_get_vport_stats(cdev, cdev->reset_stats);
> > +		_qed_get_vport_stats(cdev, cdev->reset_stats, false);
> >  		cdev->reset_stats->common.link_change_count = 0;
> >  	}
> >  }
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.h b/drivers/net/ethernet/qlogic/qed/qed_l2.h
> > index a538cf478c14..2bb93c50a2e4 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_l2.h
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_l2.h
> > @@ -249,7 +249,8 @@ qed_sp_eth_rx_queues_update(struct qed_hwfn *p_hwfn,
> >  			    enum spq_mode comp_mode,
> >  			    struct qed_spq_comp_cb *p_comp_data);
> >
> > -void qed_get_vport_stats(struct qed_dev *cdev, struct qed_eth_stats *stats);
> > +void qed_get_vport_stats(struct qed_dev *cdev,
> > +			 struct qed_eth_stats *stats, bool is_atomic);
> >
> >  void qed_reset_vport_stats(struct qed_dev *cdev);
> >
> > diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
> > index c91898be7c03..307856c4ed22 100644
> > --- a/drivers/net/ethernet/qlogic/qed/qed_main.c
> > +++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
> > @@ -3101,7 +3101,7 @@ void qed_get_protocol_stats(struct qed_dev *cdev,
> >
> >  	switch (type) {
> >  	case QED_MCP_LAN_STATS:
> > -		qed_get_vport_stats(cdev, &eth_stats);
> > +		qed_get_vport_stats(cdev, &eth_stats, false);
> >  		stats->lan_stats.ucast_rx_pkts =
> >  					eth_stats.common.rx_ucast_pkts;
> >  		stats->lan_stats.ucast_tx_pkts =
> > @@ -3161,7 +3161,7 @@ qed_fill_generic_tlv_data(struct qed_dev *cdev, struct qed_mfw_tlv_generic *tlv)
> >  		}
> >  	}
> >
> > -	qed_get_vport_stats(cdev, &stats);
> > +	qed_get_vport_stats(cdev, &stats, false);
> >  	p_common = &stats.common;
> >  	tlv->rx_frames = p_common->rx_ucast_pkts + p_common->rx_mcast_pkts +
> >  			 p_common->rx_bcast_pkts;
> > diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
> > index f90dcfe9ee68..312b1c2484fe 100644
> > --- a/drivers/net/ethernet/qlogic/qede/qede.h
> > +++ b/drivers/net/ethernet/qlogic/qede/qede.h
> > @@ -569,7 +569,7 @@ void qede_set_udp_tunnels(struct qede_dev *edev);
> >  void qede_reload(struct qede_dev *edev,
> >  		 struct qede_reload_args *args, bool is_locked);
> >  int qede_change_mtu(struct net_device *dev, int new_mtu);
> > -void qede_fill_by_demand_stats(struct qede_dev *edev);
> > +void qede_fill_by_demand_stats(struct qede_dev *edev, bool is_atomic);
> >  void __qede_lock(struct qede_dev *edev);
> >  void __qede_unlock(struct qede_dev *edev);
> >  bool qede_has_rx_work(struct qede_rx_queue *rxq);
> > diff --git a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
> > index 8034d812d5a0..7e40e35d990c 100644
> > --- a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
> > +++ b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
> > @@ -408,7 +408,7 @@ static void qede_get_ethtool_stats(struct net_device *dev,
> >  	struct qede_fastpath *fp;
> >  	int i;
> >
> > -	qede_fill_by_demand_stats(edev);
> > +	qede_fill_by_demand_stats(edev, false);
> >
> >  	/* Need to protect the access to the fastpath array */
> >  	__qede_lock(edev);
> > diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
> > index 953f304b8588..6c4187e5faa5 100644
> > --- a/drivers/net/ethernet/qlogic/qede/qede_main.c
> > +++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
> > @@ -301,12 +301,12 @@ module_exit(qede_cleanup);
> >  static int qede_open(struct net_device *ndev);
> >  static int qede_close(struct net_device *ndev);
> >
> > -void qede_fill_by_demand_stats(struct qede_dev *edev)
> > +void qede_fill_by_demand_stats(struct qede_dev *edev, bool is_atomic)
> >  {
> >  	struct qede_stats_common *p_common = &edev->stats.common;
> >  	struct qed_eth_stats stats;
> >
> > -	edev->ops->get_vport_stats(edev->cdev, &stats);
> > +	edev->ops->get_vport_stats(edev->cdev, &stats, is_atomic);
> >
> >  	p_common->no_buff_discards = stats.common.no_buff_discards;
> >  	p_common->packet_too_big_discard = stats.common.packet_too_big_discard;
> > @@ -413,7 +413,7 @@ static void qede_get_stats64(struct net_device *dev,
> >  	struct qede_dev *edev = netdev_priv(dev);
> >  	struct qede_stats_common *p_common;
> >
> > -	qede_fill_by_demand_stats(edev);
> > +	qede_fill_by_demand_stats(edev, true);
> >  	p_common = &edev->stats.common;
> >
> >  	stats->rx_packets = p_common->rx_ucast_pkts + p_common->rx_mcast_pkts +
> > diff --git a/include/linux/qed/qed_eth_if.h b/include/linux/qed/qed_eth_if.h
> > index e1bf3219b4e6..f2893b6b4cb3 100644
> > --- a/include/linux/qed/qed_eth_if.h
> > +++ b/include/linux/qed/qed_eth_if.h
> > @@ -319,7 +319,7 @@ struct qed_eth_ops {
> > struct eth_slow_path_rx_cqe *cqe);
> >
> > void (*get_vport_stats)(struct qed_dev *cdev,
> > - struct qed_eth_stats *stats);
> > + struct qed_eth_stats *stats, bool is_atomic);
> >
> > int (*tunn_config)(struct qed_dev *cdev,
> > struct qed_tunn_params *params);
> > --
> > 2.27.0
> >