Message-ID: <CADEbmW1kvoqs3hAnPsrFRB3Emyf94_0WL=jt1QN+awZPCE50Cg@mail.gmail.com>
Date: Mon, 3 Apr 2023 15:42:13 +0200
From: Michal Schmidt <mschmidt@...hat.com>
To: Simon Horman <simon.horman@...igine.com>
Cc: intel-wired-lan@...ts.osuosl.org, netdev@...r.kernel.org,
Jesse Brandeburg <jesse.brandeburg@...el.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Michal Michalik <michal.michalik@...el.com>,
Arkadiusz Kubalewski <arkadiusz.kubalewski@...el.com>,
Karol Kolacinski <karol.kolacinski@...el.com>,
Petr Oros <poros@...hat.com>
Subject: Re: [PATCH net-next 2/4] ice: sleep, don't busy-wait, for sq_cmd_timeout
On Sun, Apr 2, 2023 at 1:18 PM Simon Horman <simon.horman@...igine.com> wrote:
> On Sat, Apr 01, 2023 at 07:26:57PM +0200, Michal Schmidt wrote:
> > The driver polls for ice_sq_done() with a 100 µs period for up to 1 s
> > and it uses udelay to do that.
> >
> > Let's use usleep_range instead. We know sleeping is allowed here,
> > because we're holding a mutex (cq->sq_lock). To preserve the total
> > max waiting time, measure cq->sq_cmd_timeout in jiffies.
> >
> > The sq_cmd_timeout is referenced also in ice_release_res(), but there
> > the polling period is 1 ms (i.e. 10 times longer). Since the timeout
> > was expressed in terms of the number of loops, the total timeout in this
> > function is 10 s. I do not know if this is intentional. This patch keeps
> > it.
> >
> > The patch lowers the CPU usage of the ice-gnss-<dev_name> kernel thread
> > on my system from ~8 % to less than 1 %.
> > I saw a report of high CPU usage with ptp4l where the busy-waiting in
> > ice_sq_send_cmd dominated the profile. The patch should help with that.
> >
> > Signed-off-by: Michal Schmidt <mschmidt@...hat.com>
> > ---
> > drivers/net/ethernet/intel/ice/ice_common.c | 14 +++++++-------
> > drivers/net/ethernet/intel/ice/ice_controlq.c | 9 +++++----
> > drivers/net/ethernet/intel/ice/ice_controlq.h | 2 +-
> > 3 files changed, 13 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
> > index c2fda4fa4188..14cffe49fa8c 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_common.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_common.c
> > @@ -1992,19 +1992,19 @@ ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
> > */
> > void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
> > {
> > - u32 total_delay = 0;
> > + unsigned long timeout;
> > int status;
> >
> > - status = ice_aq_release_res(hw, res, 0, NULL);
> > -
> > /* there are some rare cases when trying to release the resource
> > * results in an admin queue timeout, so handle them correctly
> > */
> > - while ((status == -EIO) && (total_delay < hw->adminq.sq_cmd_timeout)) {
> > - mdelay(1);
> > + timeout = jiffies + 10 * hw->adminq.sq_cmd_timeout;
>
> Not needed for this series. But it occurs to me that a clean-up would be to
> use ICE_CTL_Q_SQ_CMD_TIMEOUT directly and remove the sq_cmd_timeout field,
> as it seems to be only set to that constant.
Simon,
You are right. I can do that in v2.
BTW, i40e and iavf are similar to ice here.
Thanks,
Michal