Message-ID: <20201116080457.163bf83b@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date: Mon, 16 Nov 2020 08:04:57 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Björn Töpel <bjorn.topel@...il.com>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org,
	Björn Töpel <bjorn.topel@...el.com>, magnus.karlsson@...el.com,
	ast@...nel.org, daniel@...earbox.net, maciej.fijalkowski@...el.com,
	sridhar.samudrala@...el.com, jesse.brandeburg@...el.com,
	qi.z.zhang@...el.com, edumazet@...gle.com,
	jonathan.lemon@...il.com, maximmi@...dia.com
Subject: Re: [PATCH bpf-next v2 01/10] net: introduce preferred busy-polling

On Mon, 16 Nov 2020 12:04:07 +0100 Björn Töpel wrote:
> @@ -6771,6 +6806,19 @@ static int napi_poll(struct napi_struct *n, struct list_head *repoll)
> if (likely(work < weight))
> goto out_unlock;
>
> + /* The NAPI context has more processing work, but busy-polling
> + * is preferred. Exit early.
> + */
> + if (napi_prefer_busy_poll(n)) {
> + if (napi_complete_done(n, work)) {
> + /* If timeout is not set, we need to make sure
> + * that the NAPI is re-scheduled.
> + */
> + napi_schedule(n);
> + }
> + goto out_unlock;
> + }
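
[Editor's note: for context, napi_prefer_busy_poll() is the helper this
same patch introduces. A minimal sketch of it, assuming it simply tests
the new NAPI_STATE_PREFER_BUSY_POLL state bit the series adds; the
authoritative definition is elsewhere in this patch, not in the hunk
quoted above:

	/* Sketch only: assumes a NAPI_STATE_PREFER_BUSY_POLL bit in
	 * napi_struct->state, set when userspace opts in to preferred
	 * busy polling.
	 */
	static inline bool napi_prefer_busy_poll(struct napi_struct *n)
	{
		return test_bit(NAPI_STATE_PREFER_BUSY_POLL, &n->state);
	}
]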
Why is this new block placed before the disabled check?
> /* Drivers must not modify the NAPI state if they
> * consume the entire weight. In such cases this code
> * still "owns" the NAPI instance and therefore can
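
[Editor's note: to make the ordering question concrete, here is a
condensed sketch of the napi_poll() tail as it would look with this
hunk applied. The surrounding code is paraphrased from net/core/dev.c;
the "disabled check" is the existing napi_disable_pending() test:

	/* New early exit added by this hunk. */
	if (napi_prefer_busy_poll(n)) {
		if (napi_complete_done(n, work))
			napi_schedule(n);
		goto out_unlock;
	}

	/* Existing disabled check, now reached only when busy-polling
	 * is not preferred.
	 */
	if (unlikely(napi_disable_pending(n))) {
		napi_complete(n);
		goto out_unlock;
	}

Presumably the concern is that a NAPI with a disable pending now takes
the new branch first and may be re-scheduled instead of completed.]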