Message-ID: <CACKFLikqAgFP3AHeWdagRisR7EHDD6E72_cbcT0afKMOAWTJeg@mail.gmail.com>
Date: Tue, 19 Dec 2017 11:25:29 -0800
From: Michael Chan <michael.chan@...adcom.com>
To: Marcelo Ricardo Leitner <marcelo.leitner@...il.com>
Cc: David Miller <davem@...emloft.net>,
Netdev <netdev@...r.kernel.org>,
Andrew Gospodarek <andrew.gospodarek@...adcom.com>
Subject: Re: [PATCH net-next v5 0/5] Introduce NETIF_F_GRO_HW
On Tue, Dec 19, 2017 at 11:04 AM, Marcelo Ricardo Leitner
<marcelo.leitner@...il.com> wrote:
> Can we clarify on the meaning/expectations of dev_weight? The
> documentation currently says:
> The maximum number of packets that kernel can handle on a NAPI
> interrupt, it's a Per-CPU variable.
>
> I believe 'packets' here refers to packets on the wire.
>
> For drivers doing LRO, we don't have visibility on how many
> packets were aggregated so they count as 1, aggregated or not.
>
> But for GRO_HW, drivers implementing it will get a bonus on their
> dev_weight because instead of pulling 5 packets in a cycle to create 1
> gro'ed skb, they will pull 1 big packet (which includes 5) and count it
> as 1.
>
Right, as I replied to you earlier, it's very simple to make this
adjustment for GRO_HW packets in the driver. I will make this change
for bnxt_en in my next net-next patchset and I will update the
dev_weight documentation as well.
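
A minimal sketch of the kind of budget accounting being discussed, written as a standalone C function rather than actual bnxt_en driver code. The idea is that an HW-GRO'ed SKB is charged against the NAPI budget as the number of wire packets it aggregates (analogous to what the kernel exposes via skb_shinfo(skb)->gso_segs), instead of counting as 1. The function name and array-based interface are hypothetical, for illustration only:

```c
/*
 * Sketch of NAPI budget accounting for hardware-aggregated (GRO_HW)
 * completions. segs[i] holds the number of wire packets aggregated
 * into completion i. Each completion is charged per wire packet, not
 * per SKB, so an aggregate of 5 packets consumes 5 budget units.
 *
 * Returns the total number of wire packets charged. Processing stops
 * once the charge reaches the budget, mirroring a NAPI poll loop.
 */
int poll_charge_segments(const int *segs, int n, int budget)
{
	int charged = 0;
	int i;

	for (i = 0; i < n && charged < budget; i++)
		charged += segs[i];	/* charge per wire packet */

	return charged;
}
```

With per-SKB counting, 5 aggregated completions would cost only 5 budget units regardless of how many wire packets they carry; charging per segment as above removes that bonus, which matches the adjustment proposed for drivers implementing NETIF_F_GRO_HW.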