Message-ID: <3feda087-400b-428c-8604-7b993e5662bf@amd.com>
Date: Thu, 1 Aug 2024 16:07:23 -0700
From: "Nelson, Shannon" <shannon.nelson@....com>
To: Nick Child <nnac123@...ux.ibm.com>, netdev@...r.kernel.org
Cc: bjking1@...ux.ibm.com, haren@...ux.ibm.com, ricklind@...ibm.com
Subject: Re: [PATCH net-next 1/2] ibmveth: Optimize poll rescheduling process
On 8/1/2024 2:12 PM, Nick Child wrote:
>
> When the ibmveth driver processes fewer frames than its budget, it must
> call napi_complete_done() to release the napi instance. This function
> returns false if the driver should avoid rearming interrupts.
> Previously, the driver was ignoring the return code of
> napi_complete_done(). As a result, there were unnecessary calls to
> enable the veth irq.
> Therefore, use the return code of napi_complete_done() to determine
> whether an irq rearm is necessary.
>
> Additionally, in the event that new data is received immediately after
> rearming interrupts, rather than just rescheduling napi, also jump
> back to the poll processing loop since we are already in the poll
> function (and know that we did not expend the entire budget).
>
> This slight tweak results in a 15% increase in TCP_RR transaction rate
> (320k to 370k txns). The ftrace data supports this:
> PREV: ibmveth_poll = 8818014.0 us / 182802.0 hits = AVG 48.24
> NEW: ibmveth_poll = 8082398.0 us / 191413.0 hits = AVG 42.22
>
> Signed-off-by: Nick Child <nnac123@...ux.ibm.com>
> ---
> drivers/net/ethernet/ibm/ibmveth.c | 28 +++++++++++++++-------------
> 1 file changed, 15 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
> index 4c9d9badd698..e6eb594f0751 100644
> --- a/drivers/net/ethernet/ibm/ibmveth.c
> +++ b/drivers/net/ethernet/ibm/ibmveth.c
> @@ -1337,6 +1337,7 @@ static int ibmveth_poll(struct napi_struct *napi, int budget)
> unsigned long lpar_rc;
> u16 mss = 0;
>
> +restart_poll:
> while (frames_processed < budget) {
> if (!ibmveth_rxq_pending_buffer(adapter))
> break;
> @@ -1420,24 +1421,25 @@ static int ibmveth_poll(struct napi_struct *napi, int budget)
>
> ibmveth_replenish_task(adapter);
>
> - if (frames_processed < budget) {
> - napi_complete_done(napi, frames_processed);
> + if (frames_processed == budget)
> + goto out;
>
> - /* We think we are done - reenable interrupts,
> - * then check once more to make sure we are done.
> - */
> - lpar_rc = h_vio_signal(adapter->vdev->unit_address,
> - VIO_IRQ_ENABLE);
> + if (!napi_complete_done(napi, frames_processed))
> + goto out;
>
> - BUG_ON(lpar_rc != H_SUCCESS);
> + /* We think we are done - reenable interrupts,
> + * then check once more to make sure we are done.
> + */
> + lpar_rc = h_vio_signal(adapter->vdev->unit_address, VIO_IRQ_ENABLE);
> + BUG_ON(lpar_rc != H_SUCCESS);
I know you're just moving this around, but maybe this is a good time to
get rid of the deprecated BUG_ON?
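Something along these lines, maybe (untested sketch, and assuming
adapter->netdev is the right handle to use here; how you actually want
to handle the failure is of course up to you):

	/* warn instead of crashing if the hcall unexpectedly fails */
	lpar_rc = h_vio_signal(adapter->vdev->unit_address, VIO_IRQ_ENABLE);
	if (lpar_rc != H_SUCCESS)
		netdev_warn(adapter->netdev,
			    "h_vio_signal(VIO_IRQ_ENABLE) failed, rc=%lu\n",
			    lpar_rc);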
Otherwise this looks reasonable.
Reviewed-by: Shannon Nelson <shannon.nelson@....com>
>
> - if (ibmveth_rxq_pending_buffer(adapter) &&
> - napi_schedule(napi)) {
> - lpar_rc = h_vio_signal(adapter->vdev->unit_address,
> - VIO_IRQ_DISABLE);
> - }
> + if (ibmveth_rxq_pending_buffer(adapter) && napi_schedule(napi)) {
> + lpar_rc = h_vio_signal(adapter->vdev->unit_address,
> + VIO_IRQ_DISABLE);
> + goto restart_poll;
> }
>
> +out:
> return frames_processed;
> }
>
> --
> 2.43.0
>
>